Batch Processing


Batch processing refers to scheduling and processing large volumes of data together, generally during periods when demand on computing resources is low. Batch jobs are typically repetitive and are often automated to run at set intervals, such as the end of the day or the end of the week. This contrasts with stream processing, in which data is fed into a system continuously as it becomes available. Credit card transactions are a common example of batch processing: they are typically posted to account statements together overnight rather than applied to individual accounts instantaneously. Batching large database updates together makes efficient use of processing resources without interrupting day-to-day business operations.
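The credit card example above can be sketched in a few lines of Python. This is a minimal illustration, not a real banking system: the `Transaction` record and `process_batch` function are hypothetical names, and in practice the batch run would be triggered by a scheduler such as cron rather than called inline.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account_id: str
    amount: float

def process_batch(transactions):
    """Apply all pending transactions to account balances in one pass."""
    balances = {}
    for txn in transactions:
        balances[txn.account_id] = balances.get(txn.account_id, 0.0) + txn.amount
    return balances

# Transactions accumulate during the day...
pending = [
    Transaction("acct-1", -25.00),
    Transaction("acct-2", -10.50),
    Transaction("acct-1", -4.75),
]

# ...and are applied together in a single scheduled batch run,
# rather than hitting each account the moment a card is swiped.
statements = process_batch(pending)
print(statements)  # {'acct-1': -29.75, 'acct-2': -10.5}
```

Deferring the work to one pass over the accumulated records is what lets the heavy database updates run off-hours, without competing with daytime workloads.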

Use Trifacta to Automate Batch Processing

Trifacta makes it easy to automate the data transformation steps in data pipelines that use batch processing. After using Trifacta’s intuitive interface to prepare your data, you can save your transformation steps as recipes and include them as part of your automated data pipelines.
