Version: 0.5

How to manage Tecton versions and safely upgrade

We’re constantly improving Tecton and adding new capabilities with each release. That said, each minor release can still include breaking changes or unexpected issues (if you haven’t seen our versioning policies, see Versioning Scheme).

To minimize production outages and surprises, we recommend reading release notes / upgrade guides and following these steps:

  1. (Setup) Pin versions of Tecton
  2. Test the new Tecton version on a staging workspace
  3. Upgrade production Tecton clusters and inference pipelines

1) How to pin Tecton versions

There are multiple places where Tecton manages versions. Tecton recommends pinning the minor version while allowing automatic upgrades for patch versions (e.g. 0.5.*). This ensures you get low-risk bug fixes automatically, but can adopt new minor versions deliberately.

Generally, this means pinning the Tecton version (pip install tecton==0.5.*, pinning Tecton Spark UDFs) and also pinning the Python version (e.g. to Python 3.8).


If you will be running unit tests, use one of the following commands, instead of pip install tecton==0.5.*, to pin the Tecton version:

  • To install with PySpark 3.1: pip install 'tecton[pyspark]==0.5.*'
  • To install with PySpark 3.2: pip install 'tecton[pyspark3.2]==0.5.*'
  • To install with PySpark 3.3: pip install tecton==0.5.* pyspark==3.3
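The effect of a minor-series pin like ==0.5.* can be sketched in a few lines. The matches_pin helper below is hypothetical (it is not part of the Tecton SDK or pip); it just mimics how such a specifier accepts patch releases of the pinned minor series while rejecting new minor versions:

```python
def matches_pin(version: str, pin: str = "0.5") -> bool:
    """Mimic a pip '==0.5.*' specifier: accept any patch release of the
    pinned minor series, reject releases from other minor series."""
    major_minor = ".".join(version.split(".")[:2])
    return major_minor == pin

# Patch releases of the pinned series are picked up automatically...
print(matches_pin("0.5.7"))  # True
# ...but a new minor version requires an explicit change to the pin.
print(matches_pin("0.6.0"))  # False
```

This is why the pin gives you low-risk bug fixes for free while keeping minor upgrades a deliberate, reviewable change.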
Each place Tecton manages a version is listed below, with its description and the action items for pinning:

Type: CLI
Description: The main place to update is in CI/CD (to use tecton apply, to rotate API keys, run tests).
Action items: Pin the Tecton pip version and the Python version (e.g. in the Docker image used for CI/CD).

Type: Offline feature retrieval (including notebook environments)
Description: This is how you connect to Tecton for offline feature retrieval, whether for training data generation or batch scoring.

In Tecton on Spark (Databricks / EMR), you set up a notebook Spark cluster, which needs to be updated. If you’re using the public preview Athena retrieval, then you need to track users of that API too.

In Tecton on Snowflake, you can retrieve features in any Python environment (e.g. via a Jupyter notebook server).
Action items:
Tecton on Spark: As per the initial setup in Upgrading the Tecton SDK on Databricks Notebook Clusters and Upgrading the Tecton SDK on EMR, pin the versions of Tecton libraries and jars at the cluster level.

You may also want to pin the Python version. This can be achieved with a specific Databricks Runtime (which installs a specific version of Python) or a custom AMI for EMR.

EMR only: in an EMR notebook, users also select a specific version via %%configure cells.

Tecton on Spark (Athena retrieval): Pin any production inference or training jobs that leverage Tecton’s Athena feature retrieval.

Tecton on Snowflake: Ensure production inference and training jobs pin their dependencies (e.g. pip install 'tecton[snowflake]==0.5.*' and Python version 3.8).

If you use a Jupyter notebook server such as SageMaker notebooks, make sure you pin the Tecton version on those servers as well.

Type: [Tecton Managed] Materialization jobs
Description: Tecton launches jobs (e.g. Spark clusters) to materialize feature data to the offline and online store.

Spark only: each cluster uses a specific version of EMR / Databricks and a Tecton-internal materialization library version.
Action items: No action. Starting in Tecton 0.6, you’ll also be able to pin EMR / Databricks versions for these jobs.

Type: [Tecton Managed] Tecton internal Spark cluster
Description: Tecton uses this internal cluster to power tecton plan and tecton apply.
Action items: No action. Tecton upgrades this automatically.

2) Test the new Tecton version

Usually, as users go to production, they’ll have different workspaces for different stages of a launch. A good practice is to have a staging workspace that mimics the prod live workspace (perhaps processing less data) which can be used to test changes. Below, we walk through how to test a new Tecton version on such a staging workspace. Note that it is OK for a Tecton cluster to have different Tecton versions on different workspaces.

There are two options for upgrading:

  • [Faster, cheaper, less safe] Rely on tecton plan to do an in-place upgrade on staging workspaces. This is sufficient for most releases.
  • [Slower, more expensive, very safe] Do an A/B test using two different staging workspaces to validate that feature values are the same across Tecton versions.

Both are multi-step processes that aim to upgrade an existing staging workspace and check for unexpected issues. Follow any version-specific upgrade guide with the below in mind.

Faster approach: In-place upgrade

  1. Upgrade your local machine’s version of Tecton.
  2. Run tecton plan on an existing staging live workspace that is equivalent to the production workspace. Follow the version upgrade guide until tecton plan produces an expected diff (i.e. no re-materialization, no errors).
    1. This should usually indicate no need to re-materialize features (which can be costly). If it does, consult the version specific upgrade guide or contact support.
    2. If there are any deprecated fields or breaking changes, follow the version-specific upgrade guide to resolve them.
    3. Tecton may upgrade some definitions to be compatible with the latest version, as described in the plan output.
    4. (if necessary) run tecton plan --suppress-recreates when Tecton’s upgrade guide recommends it
  3. (Optional) If there are a lot of changes in the above, consider using the slower A/B test approach to be safe.
  4. On the staging workspace, when you’re confident you’ve made all the right changes, run tecton apply with the final plan ID from step 2 (i.e. tecton apply --plan-id <plan-id>).
  5. Upgrade training / batch inference jobs’ Tecton versions
  6. Validate that there are no potential production impacting issues, especially with batch inference jobs, training jobs, or (rare) re-materialization jobs for feature views.
  7. If not already done, update your staging notebook environments to use the latest version of Tecton (as per above table).

Slower approach: Run an A/B test

One way to do this is to have two different Tecton workspaces with the same feature definitions. Upgrade one workspace to the latest Tecton version, and create a new notebook cluster that uses the new Tecton version. You can then A/B test the new Tecton SDK version and verify that the outputs of get_historical_features are the same for a feature service.

Example test on Spark:

  • Control environment (existing)
    • Staging workspace applied using Tecton 0.5.7
    • Notebook cluster using Tecton 0.5.7, which calls get_historical_features
  • Treatment environment (upgrading)
    • Workspace applied using Tecton 0.6
    • Notebook cluster using Tecton 0.6, which calls get_historical_features
  • Expected outcome: The result of get_historical_features on the same spine should be identical. Make sure the spine is deterministically generated (e.g. read from S3); otherwise the results may differ.
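The comparison step can be sketched as follows, assuming the control and treatment results have each been pulled into plain Python rows (e.g. via get_historical_features(...).to_pandas().to_dict("records") on each cluster). The results_match helper and the sample rows are hypothetical stand-ins, not Tecton APIs:

```python
def results_match(control_rows, treatment_rows, key):
    """Compare two feature-retrieval results row by row, ignoring row
    order, which may legitimately differ between the two clusters."""
    order = lambda rows: sorted(rows, key=lambda r: r[key])
    return order(control_rows) == order(treatment_rows)

# Hypothetical outputs for the same deterministic spine on both clusters.
control = [
    {"user_id": "u1", "txn_count_7d": 3},
    {"user_id": "u2", "txn_count_7d": 0},
]
treatment = [
    {"user_id": "u2", "txn_count_7d": 0},
    {"user_id": "u1", "txn_count_7d": 3},
]

print(results_match(control, treatment, key="user_id"))  # True
```

Any mismatch flagged here warrants investigation against the version-specific upgrade guide before rolling the new version out to production.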

3) Upgrade production Tecton clusters and inference pipelines

  1. Follow the previous instructions for all production workspaces.
  2. Update your CI/CD pipelines to use the latest version of Tecton.
  3. Update your production training / inference jobs to use the latest version of Tecton. This likely requires updating your notebook environments as well (per above table).
  4. Validate that there are no potential production impacting issues, especially with batch inference jobs, training jobs, or (rare) re-materialization jobs for feature views.
