What Is ETL? ETL vs. ELT vs. Data Wrangling in the Cloud

Technology   |   Bertrand Cariou   |   Dec 21, 2020

Is ETL dead? Did ELT take over, or is something new taking its place? It's a question that has come up a lot in recent years as organizations modernize their analytics infrastructure. Huge shifts are afoot in the analytics landscape, and it isn't always clear where this change leaves ETL. The short answer? No, ETL is not dead. But the ETL pipeline looks different today than it did a few decades ago. Organizations might not need to ditch ETL entirely, but they do need to closely evaluate its current role and understand how it could be better utilized within a modern analytics landscape.

In this post, we'll dig into the challenges of traditional ETL, how it compares to ELT, and how organizations are supplementing ETL tools and processes with data preparation technologies. Keep reading to learn more, or download our ebook on the "death" of ETL, "EOL for ETL? The Future of Data Wrangling in the Cloud."

The Trouble with Traditional ETL in a Modern Organization

First, what exactly does ETL mean? ETL refers to three steps (extract, transform, load) used to integrate data from multiple sources into a centralized repository. Roughly 25 years ago, ETL tools were created to automate much of the tedious coding required to retrieve and cleanse data. At the time, ETL was designed to handle data that was generally well-structured, often originating from the operational systems and databases the organization wanted to report against. Each ETL pipeline was built for a specific set of users. And the end result was a success: the productivity gains from ETL versus writing code by hand were undeniable.
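To ground the pattern, here is a minimal hand-rolled ETL job of the sort those tools automated, sketched in Python. SQLite stands in for the warehouse, and the file name, table, and cleansing rules are all hypothetical:

```python
import csv
import sqlite3

def extract(path):
    """E: pull raw order records from an operational export (a hypothetical CSV)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """T: cleanse and standardize the data before it ever reaches the warehouse."""
    cleaned = []
    for row in rows:
        if not row.get("order_id"):  # drop records missing the key
            continue
        cleaned.append({
            "order_id": int(row["order_id"]),
            "customer": row["customer"].strip().title(),  # standardize names
            "amount": round(float(row["amount"]), 2),     # normalize amounts
        })
    return cleaned

def load(rows, conn):
    """L: write only the transformed, report-ready rows into the target."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, customer TEXT, amount REAL)"
    )
    conn.executemany(
        "INSERT INTO orders VALUES (:order_id, :customer, :amount)", rows
    )
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect("warehouse.db")  # stand-in for a real warehouse
    load(transform(extract("orders.csv")), conn)
```

Note where the transform step sits: the warehouse only ever sees cleansed data. That coupling of transformation to load time is exactly what ELT later relaxes.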

Today, much of the architecture and data surrounding ETL has changed. No longer are data warehouses the common end target; more often, it's data lakes. The data itself has become much bigger and messier. And even the use cases, which were historically clearly defined, have grown experimental in nature. Perhaps the biggest difference is that instead of providing data for a few business groups, ETL pipelines are expected to serve a huge variety of users across an organization. Each of these users requires data that has been cleansed and transformed differently. But there's one commonality: they all want the data fast, and the number of use cases they're working with is growing exponentially.

Traditional ETL pipelines have struggled to support the self-service agility required by these emerging analytics use cases. ETL tools were built for IT users, not business users, which often leaves business users waiting in line to get data cleaned, passing specs back and forth until they've received their desired output. Meanwhile, IT teams, once considered the target end user for all data operations, are struggling to offload some of the cleansing and standardization tasks in ETL that business users are begging to take on. Ironically, many organizations now consider ETL pipelines the bottleneck in their analytics efforts, much the same way they looked at hand-written code 25 years ago.

ETL vs ELT: Decoupling ETL with ELT

Traditional ETL might be considered a bottleneck, but that doesn't mean it's without value. The same basic challenges that ETL tools and processes were designed to solve still exist, even if many of the surrounding factors have changed. For example, at a fundamental level, organizations still need to extract (E) data from legacy systems and load (L) it into their data lake. And they still need to transform (T) that data for use in analytics projects. "ETL" work still needs to get done; what can change is the order in which it happens and the technologies that support it.

Instead of an ETL pipeline, many organizations are taking an "ELT" approach. ELT stands for extract, load, transform: data is extracted from its source, loaded into the target system in raw form, and only then converted into a usable format for analysis. Modern data innovations, most notably the cloud, have made this reordering practical. We'll examine why it's often better to load before transforming.
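To make the reordering concrete, here is the same hypothetical job restructured as ELT: the raw rows land untouched, and the transformation runs afterward as a query inside the target system. SQLite again stands in for a cloud warehouse or lake, and the names are assumptions carried over from the earlier sketch:

```python
import csv
import sqlite3

def extract_and_load(path, conn):
    """E + L: land the raw data as-is, with no cleansing at load time."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    conn.execute(
        "CREATE TABLE IF NOT EXISTS raw_orders (order_id TEXT, customer TEXT, amount TEXT)"
    )
    conn.executemany(
        "INSERT INTO raw_orders VALUES (:order_id, :customer, :amount)", rows
    )
    conn.commit()

def transform_in_target(conn):
    """T: the transformation now runs inside the target, on its own schedule."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS orders_clean AS
        SELECT CAST(order_id AS INTEGER)       AS order_id,
               TRIM(customer)                  AS customer,
               ROUND(CAST(amount AS REAL), 2)  AS amount
        FROM raw_orders
        WHERE order_id IS NOT NULL AND order_id <> ''
    """)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect("lake.db")  # stand-in for a cloud data lake
    extract_and_load("orders.csv", conn)
    transform_in_target(conn)          # can run hours later, or per use case
```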

This ELT approach follows a larger IT trend. Whereas IT architecture was historically built in monolithic silos, many organizations are now decoupling its components so that they function independently, as the ELT process does. Decoupled technologies mean less work up front (stacks don't need to be deployed with every possible use and outcome understood in advance) and more efficient maintenance. A clean separation between data movement and data preparation (often through the ELT process) also comes with its own specific benefits:

  • Less friction. The person or process loading the data isn't responsible for transforming it to spec at load time. Postponing transformation until after data is loaded creates an incentive for sourcing and sharing data. It also preserves the raw fidelity of the data.
  • More control. Loading data into a shared repository enables IT to manage all of an organization's data under a single API and authorization framework. At least at the granularity of files, there is a single point of control.
  • More flexibility and transparency. Information can be lost as raw data is "boiled down" for a specific use case. By contrast, untransformed data can be reused for different purposes and leaves a record for auditing and compliance, as the sketch after this list illustrates.
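Here is a sketch of that reuse, continuing the hypothetical raw_orders table from the ELT example: two consumers derive differently shaped outputs from the same untouched rows, and the raw table itself remains as the audit record.

```python
import sqlite3

def build_finance_view(conn):
    """One consumer: finance wants normalized revenue per customer."""
    conn.execute("""
        CREATE VIEW IF NOT EXISTS revenue_by_customer AS
        SELECT TRIM(customer) AS customer,
               SUM(CAST(amount AS REAL)) AS revenue
        FROM raw_orders
        GROUP BY TRIM(customer)
    """)

def build_audit_view(conn):
    """Another consumer: a quality audit over the very same raw rows."""
    conn.execute("""
        CREATE VIEW IF NOT EXISTS suspect_orders AS
        SELECT * FROM raw_orders
        WHERE order_id IS NULL OR order_id = ''
           OR CAST(amount AS REAL) <= 0
    """)

if __name__ == "__main__":
    conn = sqlite3.connect("lake.db")
    build_finance_view(conn)
    build_audit_view(conn)
```

Had the data been boiled down at load time, neither view could be rebuilt to a different spec without going back to the source.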

Supplementing Data Transformation with Data Preparation

Decoupling the ETL process with ELT is a significant step. But many organizations are going even further: not only are they transforming their ETL pipelines into ELT, they are also replacing the "T" (transform) with data preparation platforms. Why? Because decoupling has many benefits, but in and of itself it still doesn't address the core reason traditional ETL became a bottleneck: the high demand from business users for access to data. ELT alone can't solve that problem.

Data preparation solutions empower a new set of users to access data, explore it to assess its content and quality, and prepare it for use, while handling much of the transformation work of traditional ETL. Unlike ETL tools, data preparation platforms are built for business users, not IT, and incorporate visualization techniques and machine learning to make the data transformation process as intuitive as possible. Examples of modern data preparation platforms include Alteryx Designer Cloud and Google Cloud Dataprep, which allow any data professional to transform the data they need while reducing the total time spent preparing data by up to 90%. Combining data prep and ETL in the cloud brings the benefits of both processes to a modern environment.
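Those platforms are visual, but the workflow they streamline (profile the data, assess its quality, then apply cleaning rules) can be sketched programmatically. Below is a minimal illustration with pandas; the columns and rules are hypothetical and not taken from either product:

```python
import pandas as pd

df = pd.read_csv("orders.csv")  # a hypothetical raw extract

# Explore and assess: the quick profiling a prep tool surfaces visually
print(df.dtypes)                              # column types
print(df.isna().mean())                       # share of missing values per column
print(df["customer"].value_counts().head())   # spot inconsistent spellings

# Prepare: the kinds of rules a prep platform suggests interactively
cleaned = (
    df.dropna(subset=["order_id"])            # drop rows missing the key
      .assign(
          customer=lambda d: d["customer"].str.strip().str.title(),
          amount=lambda d: d["amount"].round(2),
      )
)
cleaned.to_csv("orders_prepared.csv", index=False)  # hand off for analysis
```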

New Beginnings with ETL 

The core problems that ETL was built to solve still exist today, and for that reason it remains an important component in many analytics architectures. But organizations need to retire legacy ETL approaches that do not (and were never designed to) meet the needs of business users. Supplementing ETL steps with a data preparation platform is the best way to ensure that business users have the data they need, when they need it, while still partnering with IT.

To learn more about how ETL and data preparation should work hand-in-hand and the new order of operations that organizations are instituting, download our ebook on the “death” of ETL, “EOL for ETL? The Future of Data Wrangling in the Cloud.” 
