
A machine learning (ML) pipeline is a sequence of automated steps used to train and deploy a machine learning model. These steps typically include data extraction, data processing, model training, model deployment, model validation, and model re-training, with the last three steps repeated continuously to iterate on and improve models.
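
A minimal sketch of those stages expressed as named, modular steps, here using scikit-learn's Pipeline. The toy dataset, step names, and model choice are illustrative assumptions, not part of any particular workflow described above.

```python
# Minimal sketch of pipeline stages as named, swappable steps.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Data extraction: a toy dataset stands in for a real source system.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Data processing and model training expressed as named pipeline steps.
pipeline = Pipeline([
    ("scale", StandardScaler()),                    # data processing
    ("model", LogisticRegression(max_iter=1000)),   # model training
])
pipeline.fit(X_train, y_train)

# Model validation: score held-out data before deployment or re-training.
print("validation accuracy:", pipeline.score(X_test, y_test))
```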

By splitting machine learning workflows into these independent steps, teams can update or reuse individual pieces without breaking or having to rewrite the entire pipeline. For example, by swapping out a single step, teams can apply different source data to an existing model, update data processing procedures, or train new models against data that has already been pre-processed. This flexibility improves both development efficiency and model quality.
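
Continuing the sketch above, swapping out a single step without touching the rest of the pipeline might look like the following; the alternative model is an illustrative assumption, and the objects (`pipeline`, training and test splits) come from the previous sketch.

```python
# Replace only the "model" step; the "scale" step and the rest of the
# workflow are reused unchanged.
from sklearn.ensemble import RandomForestClassifier

pipeline.set_params(model=RandomForestClassifier(random_state=0))

# Re-train and re-validate against data that is still pre-processed by
# the same, unchanged "scale" step.
pipeline.fit(X_train, y_train)
print("validation accuracy:", pipeline.score(X_test, y_test))
```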
