Version: 0.5

Making a Prediction

In this topic, you will:

  • Serve the model for inference.
  • Run code that calls the model to make a prediction, sending it the inference feature data.

Serving the model for inference

note

The remainder of this tutorial uses Databricks Managed MLflow to serve an MLflow model for inference using a REST endpoint. In general, however, you can use the system of your choice to serve a model for inference.

An MLflow model is a separate object from the rf model that you trained earlier, in Creating and Training a Model. As such, after creating an MLflow model, you will need to point your MLflow model to the rf model.

Creating an MLflow model

In MLflow, serve a model from a REST endpoint by following these steps:

  1. In the Databricks Web UI, select the Machine Learning view on the upper-left of the navigation pane.

  2. On the lower part of the navigation pane, select Models.

  3. Click Create Model, give the model a name, and click Create.

Enable model serving

After creating an MLflow model (the previous step), select the Serving tab at the top of the screen. Click Enable Serving.

Associate the MLflow model with the model you have trained

To associate the MLflow model with the model you have trained, first log the model you trained to an MLflow run. Then, set the MLflow model to point to that run.

Log the model you trained to an MLflow run

import mlflow
import mlflow.sklearn
from sklearn.metrics import mean_squared_error

with mlflow.start_run() as run:

    # Log parameters
    mlflow.log_param("num_trees", n_estimators)
    mlflow.log_param("maxdepth", max_depth)
    mlflow.log_param("max_feat", max_features)
    mlflow.log_param("tecton_feature_service", "fraud_detection_feature_service")

    # Log the model that you trained (rf)
    mlflow.sklearn.log_model(rf, "random-forest-model")

    # Log metrics
    mlflow.log_metric("mse", mse)

The output will read: Logged 1 run to an experiment in MLflow., where run is a link. Click the link.

Setting the MLflow model to point to the MLflow run

On the right side, click Register Model. Select your MLflow model and click Register.

Return to the Registered Models screen by clicking on Models on the lower part of the navigation pane. Then select your model and go to the Serving tab. On the left side, you will see a list of model versions. Since this is a new model, there is only one version. For this version, copy the Model URL located on the right; this is the <production model URL> that you will use in code later.

note

The model status must be Ready before you can call it (the next step).

Calling the model (sending it the feature data for inference) to make a prediction

info

For the purpose of this tutorial, run the code below in a notebook. To use this code in production, add this code in the appropriate location in your application.

The code below uses the <production model URL>, which is the URL that you copied in the last section. To authenticate to the URL, the code retrieves a token that is stored in a Databricks secret, using dbutils.secrets.get(). For more information, see the secret management topics in the Databricks documentation.

import pandas as pd
import requests


def get_prediction_from_model(dataset):
    headers = {
        "Authorization": "Bearer "
        + dbutils.secrets.get(scope="<scope name>", key="<key name>")
    }
    response = requests.request(
        method="POST",
        headers=headers,
        url="<production model URL>",
        json=dataset,
    )
    if response.status_code != 200:
        raise Exception(
            f"Request failed with status {response.status_code}, {response.text}"
        )
    return response.json()


# Call the above function, sending inference_feature_data_model_format as input.
# inference_feature_data_model_format was generated previously,
# in the Read Feature Data for Inference topic of this tutorial.
prediction = get_prediction_from_model(inference_feature_data_model_format)

# Display the prediction
print(prediction)

Sample output:

{'predictions': [0.0]}
note

The prediction value will be between 0 and 1; the higher the value, the higher the probability that the transaction is fraudulent. The value can be exactly 0 or 1, because this is the behavior of the RandomForestRegressor model that made the prediction.
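To see why the score stays in this range, consider a standalone sketch (toy data, not the tutorial's fraud features): a RandomForestRegressor trained on 0/1 labels averages the predictions of its trees, so its output falls in [0, 1], and a typical next step is to threshold that score into a fraud/not-fraud label. The threshold value below is illustrative only.

```python
from sklearn.ensemble import RandomForestRegressor

# Toy feature data standing in for the Tecton inference features.
X = [[0.0], [0.1], [0.2], [0.8], [0.9], [1.0]]
y = [0, 0, 0, 1, 1, 1]  # 0 = legitimate, 1 = fraudulent

toy_model = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)

# Each tree predicts a mean of 0/1 training labels, so the averaged
# score is always within [0, 1].
score = toy_model.predict([[0.95]])[0]

# Example threshold; tune it for your own precision/recall needs.
is_fraud = score >= 0.5
```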

Deleting your live workspace

caution

Deleting a workspace removes all feature definitions and materialized feature data from the workspace.

After completing the previous section, disable materialization to avoid ongoing compute costs. The easiest way is to delete the live workspace that you created in the Enabling Materialization topic.

tecton workspace delete <live workspace name>

Output:

Deleted workspace "<live workspace name>".
Switched to workspace "prod".

The tecton workspace delete command switches to the prod workspace after deleting the requested workspace. To work in a different workspace, switch to it with tecton workspace select.
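For example, to switch to another existing workspace (the workspace name below is a placeholder, as elsewhere in this tutorial):

```shell
tecton workspace select <workspace name>
```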

info

As an alternative to deleting the live workspace, you could disable materialization by setting the values of the online and offline parameters in all Feature Views to False.
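As a sketch of that alternative, assuming a batch Feature View like the ones defined earlier in this tutorial (the definition shown is illustrative, not one of the tutorial's actual Feature Views):

```python
from tecton import batch_feature_view

# Illustrative only: in each Feature View definition, set both flags to
# False so that neither online nor offline materialization runs, then
# re-apply the workspace with `tecton apply`.
@batch_feature_view(
    ...,            # existing sources, entities, and schedule stay as-is
    online=False,   # stop materializing to the online store
    offline=False,  # stop materializing to the offline store
)
def my_feature_view(input_data):
    ...
```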

Tutorial complete

You have now completed this fundamentals tutorial. Congratulations! To continue learning how to use Tecton, consult the left navigation bar to find your topics of interest.
