Trifacta Partners with Databricks to Deliver Faster ROI on Data Lakehouses

April 7, 2021

Today, we’re excited to announce that Trifacta now natively integrates with the Databricks Lakehouse Platform. As a quintessential data lakehouse, the Databricks Lakehouse Platform is streamlined, open, and capable of supporting a wide range of analytics workloads and data types. For those unfamiliar with the term, a “lakehouse” combines elements of both a data lake and a traditional data warehouse and can simplify a multiple-system setup that includes a data lake, several data warehouses, and other specialized systems. With the integration of Trifacta, Databricks Lakehouse Platform users can now accelerate the process of developing and orchestrating data pipelines on their data lakehouse.

No matter how an organization chooses to modernize its analytics processes—whether with cloud data lakes, warehouses, or lakehouses—building data pipelines that feed high-quality data to business applications is essential to its success. Historically, a small group of data engineers exclusively managed the process of setting up and maintaining data pipelines, using code-intensive, labor-intensive tools and processes. Now, data engineers are seeking more agile and flexible options, such as Trifacta, that prepare data and move it into the hands of business users faster.

Trifacta is a visually driven platform that accelerates the process of setting up data pipelines designed to transform and structure data as it moves from one system to another. With the help of Trifacta, many organizations are even offloading the majority of this work to business users themselves, who, with the right context about the data, can make more informed decisions about how and where it should be transformed.

Trifacta’s integration with Databricks also includes support for Delta Lake, enabling data workers to accelerate the process of refining the data that feeds analytic workloads. Any transformation built in Trifacta is translated into Spark code that executes on Databricks at runtime, and Databricks automatically scales processing based on the parameters of the transformation job. The result is faster, more reliable data pipelines that work as hard as analysts do.

The integration with the Databricks Lakehouse Platform is another leap forward in our many partnerships with cloud data warehouses and data lakes. From day one, Trifacta was built to transform any data, no matter where it lives, and we’re pleased to be able to adapt with our customers as they increasingly adopt cloud solutions.

Trifacta offers support for all major cloud platforms; Databricks is available on Microsoft Azure and Amazon Web Services (AWS). Get started with Trifacta for free today by signing up here.