

Tecton raises $100M, proving that the MLOps market is still hot – TechCrunch

#artificialintelligence

Machine learning can provide companies with a competitive advantage by using the data they're already collecting -- for example, purchasing patterns -- to generate predictions that power revenue-generating products. But it's difficult for any one employee to keep up with, much less manage, the massive volumes of data being created. That poses a problem, given that AI systems tend to deliver superior predictions when they're fed up-to-the-minute data. Systems that aren't regularly retrained on new data risk becoming "stale" and less accurate over time. Fortunately, an emerging set of practices dubbed "MLOps" promises to simplify the process of feeding data to these systems by abstracting away the complexities.


Tecton Reports Record Demand for Its Machine Learning Feature Platform as It Raises $100 Million in Funding Led by Kleiner Perkins With Participation from Strategic Investors Databricks and Snowflake Ventures as well as Andreessen Horowitz, Bain Capital Ventures, Sequoia Capital and Tiger Global

#artificialintelligence

"We believe that any company should be able to develop reliable operational ML applications and easily adopt real-time capabilities no matter the use case at hand or the engineering resources on staff. This new funding will help us further build and strengthen both Tecton's feature platform for ML and the Feast open source feature store, enabling organizations of all sizes to build and deploy automated ML into live, customer-facing applications and business processes, quickly and at scale," said Mike Del Balso, co-founder and CEO of Tecton. Tecton was founded by the creators of Uber's Michelangelo platform to make world-class ML accessible to every company. Tecton is a fully managed ML feature platform that orchestrates the complete lifecycle of features, from transformation to online serving, with enterprise-grade SLAs. The platform enables ML engineers and data scientists to automate the transformation of raw data, generate training data sets and serve features for online inference at scale.


Why the market for feature stores is exploding – TechCrunch

#artificialintelligence

"Feature stores," with their dreary and opaque moniker, might not sound like the sexiest subject. Yet they're attracting an increasing amount of attention and investment from venture firms, which see the market opportunity growing well into the future. AI systems are made up of many components, one of which is features. Features are the individual variables that act as inputs to the system. In thinking about features, it can be helpful to visualize a table, where the data used by AI systems is organized into rows of examples (data from which the system learns to make predictions) and columns of attributes (data describing those examples).
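The rows-and-columns picture above can be made concrete with a toy table. This is a purely illustrative sketch (the column names and values are invented, not drawn from any real feature store):

```python
import pandas as pd

# A toy feature table: rows are examples the model learns from,
# columns are features (attributes) describing each example.
# All names and values here are hypothetical.
features = pd.DataFrame(
    {
        "user_id": [1, 2, 3],
        "txn_count_30m": [4, 1, 9],            # a rolling aggregation feature
        "avg_purchase_usd": [12.5, 80.0, 3.2], # a numeric attribute
        "label_is_fraud": [0, 0, 1],           # the value the model predicts
    }
)
print(features.shape)  # (3, 4): 3 examples, 4 columns
```

A feature store's job, at its simplest, is to keep tables like this consistent between model training (offline) and model serving (online).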


Feature Stores for Real-time AI & Machine Learning - KDnuggets

#artificialintelligence

Real-time AI/ML use cases such as fraud prevention and recommendations are on the rise, and feature stores play a key role in deploying them successfully to production. According to the popular open source feature store Feast, one of the most common questions users ask in its community Slack is: how scalable and performant is Feast? That's because the most important characteristic of a feature store for real-time AI/ML is the feature serving speed from the online store to the ML model for online predictions or scoring. Successful feature stores meet stringent latency requirements (measured in milliseconds) consistently (think p99) and at scale (hundreds of thousands, even millions, of queries per second, against gigabyte- to terabyte-sized datasets), while maintaining a low total cost of ownership and high accuracy. As we will see in this post, the choice of online store and the architecture of the feature store both play important roles in determining how performant and cost-effective it is.
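The p99 metric mentioned above can be demonstrated with a minimal sketch. This is not Feast or any real online store; it stands in a dict for the key-value store (real backends such as Redis or DynamoDB add network latency on top) and measures the 99th-percentile lookup time:

```python
import random
import time

# Hypothetical in-memory stand-in for an online feature store:
# entity ID -> latest feature values.
online_store = {
    f"card_{i}": {"txn_count_30m": random.randint(0, 20)}
    for i in range(10_000)
}

def get_online_features(entity_id):
    """Simulated online lookup; a real store would go over the network."""
    return online_store[entity_id]

# Time many lookups and report the p99, the tail-latency metric
# the article says matters most for real-time serving.
latencies = []
for _ in range(5_000):
    key = f"card_{random.randrange(10_000)}"
    start = time.perf_counter()
    get_online_features(key)
    latencies.append(time.perf_counter() - start)

latencies.sort()
p99_ms = latencies[int(len(latencies) * 0.99)] * 1000
print(f"p99 latency: {p99_ms:.4f} ms")
```

The point of p99 (rather than the average) is that a model serving live traffic is only as fast as its slowest feature lookups; tail latency is what breaks millisecond SLAs.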


Top 10 Coolest Machine Learning Tools One Should Know About in 2021

#artificialintelligence

Machine learning tools help enterprises to understand the trends in customer behavior and business operational patterns, as well as support the development of new products. Machine learning (ML) is a type of artificial intelligence (AI) that allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so. Machine learning algorithms use historical data as input to predict new output values. Big Squid's Kraken AutoML is an automated machine learning platform for building and deploying machine learning models for business analytics – including within existing analytics stacks – without the need to write code. Kraken's no-code capabilities simplify the adoption of machine learning and AI, helping data analysts, data scientists, data engineers, and business users collaborate on machine learning and predictive analytics projects.


Real-Time Aggregation Features for Machine Learning (Part 1)

#artificialintelligence

Machine learning features are derived from an organization's raw data and provide a signal to an ML model. A very common type of feature transformation is a rolling time window aggregation. For example, you may use the rolling 30-minute transaction count of a credit card to predict the likelihood that a given transaction is fraudulent. It's easy enough to calculate rolling time window aggregations offline using window functions in a SQL query against your favorite data warehouse. However, serving this type of feature for real-time predictions in production poses a difficult problem: how can you efficiently serve such a feature, which aggregates many raw events (on the order of thousands), at very high scale (thousands of queries per second), at low serving latency (around 100 ms), at high freshness (around 1 s), and with high feature accuracy?
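The offline version of this feature is straightforward, as the paragraph notes. A minimal sketch of the rolling 30-minute transaction count, using pandas in place of a warehouse SQL window function (the timestamps are invented for illustration):

```python
import pandas as pd

# Hypothetical transactions for a single credit card.
txns = pd.DataFrame(
    {"ts": pd.to_datetime([
        "2023-01-01 10:00", "2023-01-01 10:10",
        "2023-01-01 10:25", "2023-01-01 11:00",
    ])}
).set_index("ts")
txns["amount"] = 1  # count each transaction once

# Rolling 30-minute transaction count: for each event, how many
# transactions occurred in the preceding 30 minutes (inclusive).
txns["txn_count_30m"] = txns["amount"].rolling("30min").sum()
print(txns["txn_count_30m"].tolist())  # [1.0, 2.0, 3.0, 1.0]
```

The hard part the article describes is not this computation but reproducing it continuously online, so the serving-time value stays fresh to within about a second without rescanning thousands of raw events per request.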


What is a Feature Store? - Tecton

#artificialintelligence

Data teams are starting to realize that operational machine learning requires solving data problems that extend far beyond the creation of data pipelines. In Tecton's previous post, Why We Need DevOps for ML Data, we highlighted some of the key data challenges that teams face when productionizing ML systems. However, operational machine learning -- ML-driven intelligence built into customer-facing applications -- is new for most teams. A new kind of ML-specific data infrastructure is emerging to make it possible. Increasingly, data science and data engineering teams are turning to feature stores to manage the data sets and data pipelines needed to productionize their ML applications.


Video Highlights: Accelerating the ML Lifecycle with an Enterprise-Grade Feature Store - insideBIGDATA

#artificialintelligence

Productionizing real-time ML models poses unique data engineering challenges for enterprises that are coming from batch-oriented analytics. Enterprise data, which has traditionally been centralized in data warehouses and optimized for BI use cases, must now be transformed into features that provide meaningful predictive signals to our ML models. Enterprises face the operational challenges of deploying these features in production: building the data pipelines, then processing and serving the features to support production models. ML data engineering is a complex and brittle process that can consume upwards of 80% of our data science effort, all too often grinding ML innovation to a crawl. Drawing on experience building Uber's Michelangelo platform, and on current work building next-generation ML infrastructure at Tecton.ai, the presentation shares insights on building a feature platform that empowers data scientists to accelerate the delivery of ML applications. Spark and Databricks provide a powerful and massively scalable foundation for data engineering. Building on this foundation, a feature platform extends your data infrastructure to support ML-specific requirements. It enables ML teams to track and share features with a version-control repository, process and curate feature values into a single centralized source of data, and instantly serve features for model training, batch predictions, and real-time predictions.


Tecton.ai emerges from stealth with $20M Series A to build machine learning platform – TechCrunch

#artificialintelligence

Three former Uber engineers, who helped build the company's Michelangelo machine learning platform, left the company last year to form Tecton.ai and build an operational machine learning platform for everyone else. Today the company announced a $20 million Series A from a couple of high-profile investors. That investment, combined with the seed round they spent the last year using to build the product, brings the total raised to $25 million. When you have the pedigree of these three founders -- CEO Mike Del Balso, CTO Kevin Stumpf and VP of Engineering Jeremy Hermann all helped build the Uber system -- investors will spend some money, especially on a team trying to solve a difficult problem in machine learning. Michelangelo was the machine learning platform at Uber that handled driver safety, estimated arrival times and fraud detection, among other things.