
Machine Learning Application Concepts

tip

This page explains ML application concepts that are helpful for understanding Tecton and Feature Platforms. It is not intended to be an introduction to machine learning.

If you are already a domain expert, you can skip to Tecton Concepts.

Fundamentals

Machine learning applications make automated decisions based on predictions produced by models.

Models are produced by algorithms that train on historical examples of the outcomes we want to predict. For example, to train a model that can predict whether a transaction is fraudulent, we need a dataset of examples of fraudulent and non-fraudulent transactions. However, to train a model we also need to extend our training examples with features.

Features are the measurable data points that a model uses to predict an outcome. Features are created by Data Scientists and Machine Learning Engineers, often based on their domain expertise. To predict whether a transaction is fraudulent, we may want to try features like the following, sketched in code after this list:

  • How does the given transaction amount compare to the user's historical average transaction amount?
  • How many transactions has the user made in the last day?
  • Where is the location of the user making this transaction?
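
For concreteness, here is a minimal sketch of how these three features might be computed with pandas. The `transactions` table, its column names, and the feature names are all hypothetical:

```python
import pandas as pd

# Hypothetical transactions table; the column names are illustrative.
transactions = pd.DataFrame({
    "user_id": [1, 1, 1, 2],
    "amount": [25.0, 30.0, 400.0, 12.0],
    "timestamp": pd.to_datetime(
        ["2023-01-01", "2023-01-05", "2023-01-06", "2023-01-06"]
    ),
    "location": ["US", "US", "FR", "US"],
}).sort_values("timestamp")

# Feature 1: each transaction amount relative to the user's historical average.
transactions["amount_vs_user_avg"] = (
    transactions["amount"]
    / transactions.groupby("user_id")["amount"].transform("mean")
)

# Feature 2: number of transactions the user made in the trailing 24 hours.
txn_counts = (
    transactions.groupby("user_id")
    .rolling("1D", on="timestamp")["amount"]
    .count()
)
transactions["txns_last_day"] = txn_counts.reset_index(level=0, drop=True)

# Feature 3: the transaction's location, used directly as a categorical feature.
transactions["txn_location"] = transactions["location"]
```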

Types of Machine Learning Applications

There are two broad types of machine learning applications:

  • Analytical: Predictions are used in non-production environments by analysts creating reports or dashboards. These predictions help drive human decision making.
  • Operational: Predictions are used to automate real-time decisions in production software applications.

Tecton focuses on operational machine learning applications.

Operational machine learning applications have some of the strictest and most complex requirements because they directly affect production applications and their users. Latency SLAs, uptime, and DevOps best practices (code reviews, CI/CD, etc.) are critical elements of these applications.

Two Environments for Operational Machine Learning Applications

There are two environments where operational machine learning applications can run:

  1. Online: The online environment runs the production software application that users interact with. This is where real-time decisions are made.
  2. Offline: The offline environment is a non-production environment that stores large amounts of historical data and runs large-scale distributed computing. This is where Data Scientists and Machine Learning Engineers design and test features, as well as train models. Offline environments contain data lakes and data warehouses.

Model Training Environments

Model training is almost always done in the offline environment, where a model has access to large historical datasets and large-scale compute.
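
As an illustration, here is a minimal offline training sketch using scikit-learn; the synthetic arrays stand in for historical features and fraud labels pulled from a data lake or warehouse:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical feature data and fraud labels that would
# be retrieved from the offline environment.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on historical data offline; evaluate on a holdout set.
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```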

info

There are some advanced operational machine learning applications that continuously train models in the online environment in real time. This is known as "online training", "online learning", or "continual learning".
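
A minimal sketch of the idea, using scikit-learn's `partial_fit` to update a model incrementally on a synthetic stream of labeled examples:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()

# Each new labeled example updates the model incrementally, instead of
# retraining from scratch on the full history.
for _ in range(1000):
    x = rng.normal(size=(1, 3))           # fresh feature vector
    y = np.array([int(x[0, 0] > 0)])      # label observed after the fact
    model.partial_fit(x, y, classes=[0, 1])
```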

Model Inference Environments

Offline Model Inference: In offline model inference, model predictions are made in large batches in the offline environment. To use these predictions in an online application, they are written to a database in the online environment, where they can be looked up in real time.
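
A minimal sketch of this pattern, with a toy scikit-learn model and a plain dictionary standing in for the online database:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy model standing in for one trained offline on historical data.
X_hist = rng.normal(size=(500, 3))
y_hist = (X_hist[:, 0] > 0.5).astype(int)
model = LogisticRegression().fit(X_hist, y_hist)

# Batch inference in the offline environment: score the latest features for
# every entity of interest.
user_ids = [f"user_{i}" for i in range(5)]
latest_features = rng.normal(size=(len(user_ids), 3))
scores = model.predict_proba(latest_features)[:, 1]

# Write predictions to the online environment; a dict stands in for a
# low-latency database such as Redis or DynamoDB.
online_store = {uid: float(s) for uid, s in zip(user_ids, scores)}

# The production application can now look up a precomputed prediction.
print(online_store["user_3"])
```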

Online Model Inference: In online model inference, model predictions are made in real time in the online environment, for example as a user makes a transaction or searches for a product. Online model inference is powerful because it can incorporate fresh feature data, which allows models to adapt to real-time changes such as ongoing user behavior in an application.
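
A minimal sketch of online inference, assuming a hypothetical `get_fresh_features` lookup that returns up-to-date feature values at request time:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy model standing in for one trained offline and deployed online.
X_hist = rng.normal(size=(500, 3))
y_hist = (X_hist[:, 0] > 0.5).astype(int)
model = LogisticRegression().fit(X_hist, y_hist)

def get_fresh_features(user_id: str) -> np.ndarray:
    """Hypothetical low-latency lookup of up-to-the-moment feature values."""
    return rng.normal(size=(1, 3))

def score_transaction(user_id: str) -> float:
    """Runs in the request path: fetch fresh features, then predict."""
    features = get_fresh_features(user_id)
    return float(model.predict_proba(features)[0, 1])

print(score_transaction("user_42"))
```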

info

There are two main reasons that online model inference is used in an operational machine learning application:

  1. When it is beneficial or necessary to incorporate fresh feature data from sources such as streams, operational databases, device data, or user input.
  2. When it is inefficient to precompute all possible predictions. For example, if you have a large set of users but only 10 percent are active, you may not want to repeatedly compute recommendations for the full user base. Instead, you can compute recommendations in real time when a user visits the application.

Offline and Online Feature Retrieval

Offline Feature Retrieval: In offline feature retrieval, features are fetched in batches in the offline environment for offline model training or offline model inference. Model training requires fetching historically accurate feature values for a set of training events (e.g. fraudulent and non-fraudulent transactions). Offline model inference requires fetching batches of the latest feature values for the set of entities you want to generate predictions for (e.g. the set of users for which to generate product recommendations).
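
Fetching historically accurate feature values is commonly implemented as a point-in-time join. Here is a minimal sketch using `pandas.merge_asof`; the tables and column names are illustrative:

```python
import pandas as pd

# Labeled training events and a history of feature values.
events = pd.DataFrame({
    "user_id": [1, 2],
    "event_time": pd.to_datetime(["2023-01-05", "2023-01-06"]),
    "is_fraud": [0, 1],
})
feature_history = pd.DataFrame({
    "user_id": [1, 1, 2],
    "feature_time": pd.to_datetime(["2023-01-01", "2023-01-04", "2023-01-03"]),
    "txns_last_day": [2, 5, 1],
})

# Point-in-time join: for each training event, take the most recent feature
# value known at or before the event time, so the training data matches what
# would have been available online.
training_set = pd.merge_asof(
    events.sort_values("event_time"),
    feature_history.sort_values("feature_time"),
    left_on="event_time",
    right_on="feature_time",
    by="user_id",
)
print(training_set)
```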

Online Feature Retrieval: In online feature retrieval, features are fetched at low latency in the online environment to run online model inference.
