
Reading Feature Data

Reading feature data from Tecton enables you to use engineered features in your machine learning applications and pipelines. There are three main use cases for reading feature data:

  • Training: You read historical feature data to generate training data and train a model.
  • Inference: An application reads either online or offline features and provides them to a production model to generate predictions.
  • Testing: During feature development, you read feature data to verify that features are working as expected.

This overview provides context on these use cases, outlines the methods available for reading features in each scenario, and points you to relevant documentation with implementation details and examples.

Training

To generate training data from your Tecton feature store, you read historical feature data by calling the get_historical_features() method on a Feature Service. You provide a "spine" DataFrame containing the keys and timestamps for the samples you want to include, and Tecton returns a DataFrame with the feature values joined to it. These values are point-in-time correct, meaning no future data is inadvertently included.
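
For example, a minimal sketch of generating training data with the Python SDK might look like the following. The workspace name, feature service name, and spine columns are placeholders; substitute the join keys, label, and timestamp column your Feature Service expects.

```python
import pandas as pd
import tecton

# Spine: the entity keys, label, and timestamps to generate training rows for.
# Column names are illustrative; use the join keys your Feature Service expects.
spine = pd.DataFrame({
    "user_id": ["C1000262126", "C1001843406"],
    "is_fraud": [0, 1],  # example label column, carried through to the output
    "timestamp": pd.to_datetime(["2023-05-01 10:00:00", "2023-05-02 14:30:00"]),
})

ws = tecton.get_workspace("prod")  # placeholder workspace name
fs = ws.get_feature_service("fraud_detection_feature_service")  # placeholder service name

# Point-in-time correct join of historical feature values onto the spine.
training_data = fs.get_historical_features(spine, timestamp_key="timestamp").to_pandas()
print(training_data.head())
```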

See Constructing Training Data for full details.

Inference

For online inference, you have a few options:

  • Use the Tecton HTTP API to fetch single feature vectors at low latency (see the sketch after this list).
  • Use the Java Client Library, a wrapper around the HTTP API that applies best practices for you.
  • Subscribe your application to Feature View Output Streams to receive feature updates asynchronously.

For offline batch inference, read historical features with get_historical_features(), just as you would for training.
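
As a sketch of the first option, an application can request a single feature vector from the HTTP API roughly as follows. The cluster URL, API key, workspace, feature service name, and join key values are placeholders, and the payload shape assumes the standard get-features endpoint; check the HTTP API reference for the exact contract.

```python
import requests

# Placeholders: substitute your cluster URL, API key, workspace, service, and join keys.
TECTON_URL = "https://yourcluster.tecton.ai/api/v1/feature-service/get-features"
TECTON_API_KEY = "YOUR_API_KEY"

payload = {
    "params": {
        "workspace_name": "prod",
        "feature_service_name": "fraud_detection_feature_service",
        "join_key_map": {"user_id": "C1000262126"},
    }
}

response = requests.post(
    TECTON_URL,
    json=payload,
    headers={"Authorization": f"Tecton-key {TECTON_API_KEY}"},
)
response.raise_for_status()

# The response body contains the feature vector for the requested join keys.
feature_vector = response.json()["result"]["features"]
print(feature_vector)
```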

See Reading Feature Data for Inference for full details on the methods and options available.

Testing

During feature development, you can use the Python SDK in a notebook to read either online or offline features and verify that they behave as expected. Note that the Python SDK is not suitable for production inference workloads.
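
For instance, a quick notebook check of online feature values might look like the sketch below; the workspace name, feature service name, and join key are placeholders.

```python
import tecton

ws = tecton.get_workspace("prod")  # placeholder workspace name
fs = ws.get_feature_service("fraud_detection_feature_service")  # placeholder service name

# Read the current online feature vector for one entity (for testing only, not production serving).
online_features = fs.get_online_features(join_keys={"user_id": "C1000262126"}).to_dict()
print(online_features)

# Offline features can be checked the same way by passing a spine DataFrame to
# get_historical_features(), as shown in the Training section above.
```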

See Reading Online Features for Inference using the Python SDK (for Testing) or Reading Offline Features for Inference for implementation details.
