Version: Beta 🚧

Reading Feature Data for Inference

Reading feature data for inference is the first step in making a prediction.

Feature data is read for inference by calling Tecton's HTTP API. This topic shows how to call the API from Python; the previous section showed the equivalent call using curl.

Feature data can only be read for inference in a live workspace. Currently, you are using the live workspace that you created in the Enabling Materialization topic.

Read feature data for inference by calling Tecton's HTTP API

Before calling the HTTP API, you will need to create an API key.

Creating an API key to authenticate to the HTTP API

To authenticate your requests to the HTTP API, you will need to create a Service Account to obtain an API key, and grant that Service Account the Consumer role for your workspace:

  1. Create a Service Account to obtain your API key.
tecton service-account create \
--name "sample-service-account" \
--description "An online inference sample"


Save this API Key - you will not be able to get it again.
API Key: <Your-api-key>
Service Account ID: <Your-Service-Account-Id>
  2. Assign the Consumer role to your Service Account.
tecton access-control assign-role --role consumer \
--workspace <Your-workspace> \
--service-account <Your-Service-Account-Id>


Successfully updated role.
  3. Export the API key as an environment variable named TECTON_API_KEY, or add the key to your secret manager.
export TECTON_API_KEY="<Your-api-key>"
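If you export the key as an environment variable rather than storing it in a secret manager, the authorization header for HTTP API requests can be built from it directly. A minimal sketch (the placeholder fallback is only there so the snippet runs when the variable is unset):

```python
import os

# Read the API key exported above, falling back to a placeholder so the
# snippet runs even when the environment variable is not set.
api_key = os.environ.get("TECTON_API_KEY", "<Your-api-key>")

# Tecton's HTTP API expects the key in an Authorization header using the
# "Tecton-key" scheme.
headers = {"Authorization": "Tecton-key " + api_key}
```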

Calling the HTTP API in Python

The following is Python code for calling the HTTP API. This code is equivalent to the code used in the previous section, where the HTTP API was called using curl.

The code uses a Databricks secret to store the HTTP API key and retrieves the secret using dbutils.secrets.get(). For more information, see Secret Management in the Databricks documentation.

In your notebook, run the following code:

import requests

headers = {"Authorization": "Tecton-key " + dbutils.secrets.get(scope="<scope name>", key="<key name>")}

request_data = """{
  "params": {
    "feature_service_name": "fraud_detection_feature_service",
    "join_key_map": {
      "user_id": "user_469998441571"
    },
    "metadata_options": {
      "include_names": true
    },
    "request_context_map": {
      "amt": 12345678.9,
      "merch_lat": 30,
      "merch_long": 35
    },
    "workspace_name": "<live workspace name>"
  }
}"""

inference_feature_data = requests.request(
    method="POST",
    url="https://<your cluster>.tecton.ai/api/v1/feature-service/get-features",
    headers=headers,
    data=request_data,
)
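Instead of hand-writing JSON inside a string literal, the request body can also be built as a Python dict and serialized with json.dumps, which avoids missing commas and braces. A sketch mirroring the same field values:

```python
import json

# The same request parameters as above, expressed as a Python dict.
# "<live workspace name>" is a placeholder for your workspace name.
request_payload = {
    "params": {
        "feature_service_name": "fraud_detection_feature_service",
        "join_key_map": {"user_id": "user_469998441571"},
        "metadata_options": {"include_names": True},
        "request_context_map": {
            "amt": 12345678.9,
            "merch_lat": 30,
            "merch_long": 35,
        },
        "workspace_name": "<live workspace name>",
    }
}

# Serialize the dict to the JSON body sent to the HTTP API.
request_data = json.dumps(request_payload)
```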



Format the feature data for inference

The value of inference_feature_data, which was populated by the HTTP API call above, needs to be converted into the format that the model expects:

{'dataframe_split': {'index': [0], 'data': [['1', 10720.821094068539, 'Visa', 40.1566, -95.9311, None, '17', '56']], 'columns': ['transaction_amount_is_high__transaction_amount_is_high', 'transaction_distance_from_home__dist_km', 'user_credit_card_issuer__user_credit_card_issuer', 'user_home_location__lat', 'user_home_location__long', 'user_transaction_counts__transaction_id_count_1d_1d', 'user_transaction_counts__transaction_id_count_30d_1d', 'user_transaction_counts__transaction_id_count_90d_1d']}}
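The double-underscore column names above are derived from the dot-separated feature names returned in the response metadata. A one-line sketch of the mapping, using one of the feature names from the output:

```python
# Feature names in the HTTP API response metadata separate the feature
# view name and the feature name with a dot.
name_from_metadata = "user_home_location.lat"

# The model was trained on column names where the dot is replaced by a
# double underscore (the format returned by get_historical_features()).
column_name = name_from_metadata.replace(".", "__")
print(column_name)  # user_home_location__lat
```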

The following code converts inference_feature_data to the needed format. The result is stored in inference_feature_data_model_format.

import json

inference_feature_data_json = json.loads(inference_feature_data.text)

inference_feature_data_builder = {}
inference_feature_data_builder["index"] = [0]
# The feature vector is returned under result.features in the response;
# wrap it in a list to form a single row of data.
inference_feature_data_builder["data"] = [inference_feature_data_json["result"]["features"]]
feature_column_names = inference_feature_data_json["metadata"]["features"]
inference_feature_data_builder["columns"] = []

for feature_name in feature_column_names:
    # The replace call on the next line replaces the . with __, to match
    # the format of the column names the model was trained with (the
    # format returned by get_historical_features()).
    inference_feature_data_builder["columns"].append(feature_name["name"].replace(".", "__"))

inference_feature_data_model_format = {}
inference_feature_data_model_format["dataframe_split"] = inference_feature_data_builder


Example output:

{'dataframe_split': {'index': [0], 'data': [['1', 10720.821094068539, 'Visa', 40.1566, -95.9311, None, '17', '56']], 'columns': ['transaction_amount_is_high__transaction_amount_is_high', 'transaction_distance_from_home__dist_km', 'user_credit_card_issuer__user_credit_card_issuer', 'user_home_location__lat', 'user_home_location__long', 'user_transaction_counts__transaction_id_count_1d_1d', 'user_transaction_counts__transaction_id_count_30d_1d', 'user_transaction_counts__transaction_id_count_90d_1d']}}
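To try the conversion end to end without a live cluster, the same logic can be run against a stubbed response. This sketch assumes feature values are returned under result.features and feature names under metadata.features, as the conversion code relies on; the values here are illustrative, not real output:

```python
import json

# A stubbed HTTP API response body with the shape the conversion code
# expects. Values are illustrative.
stub_response_text = json.dumps({
    "result": {"features": ["1", 10720.82, "Visa"]},
    "metadata": {"features": [
        {"name": "transaction_amount_is_high.transaction_amount_is_high"},
        {"name": "transaction_distance_from_home.dist_km"},
        {"name": "user_credit_card_issuer.user_credit_card_issuer"},
    ]},
})

parsed = json.loads(stub_response_text)

# Build the model-input format: one row of data plus renamed columns.
model_input = {
    "dataframe_split": {
        "index": [0],
        "data": [parsed["result"]["features"]],
        "columns": [f["name"].replace(".", "__") for f in parsed["metadata"]["features"]],
    }
}
```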

You will use inference_feature_data_model_format in the next section, when you get a prediction from the model.
