Want to Transform Your Data and Transform Your Business? Welcome to DIY Data from Trifacta!
Do-It-Yourself Data is all about practical data engineering to help data engineers and data analysts collaboratively profile, prepare, and pipeline your data
Upcoming Episodes: Register for the next episode
Stay tuned for the Season 3 premiere coming in June 2022
Previous Episodes: Catch recordings of past webcasts
Misagh Jebeli
Senior Data Engineer
Paul Warburg
Data-Driven Product Marketer
Episode 8
Closing the Analytics Gap: How to Drive Collaboration Between Data Analysts and Engineers
Through two seasons of DIY Data, we’ve shown how Trifacta by Alteryx enables self-service data engineering across several different use cases and architectures, while also demonstrating how Trifacta makes it easy to deliver enterprise-grade data pipelines.
Watch Recording
Amit Miller
Data Wrangling Engineer, Trifacta
Episode 7
Delivering Insights with Agility: How to Prepare and Deliver Data Without Writing Code
In this episode of DIY Data, we’ll demonstrate how data practitioners can build and orchestrate fully functional data pipelines, from ingestion to delivery, without writing a single line of code.
Watch Recording
Eric Thomas
Sales Engineer, Trifacta
Episode 6
Clean Data, Clean Teeth with Brushing! How to Get Quick, Quality Data Insights from Complex JSON Files
How does brushing work in Trifacta? We’ll dive deep into three core examples and discuss how transformations scale with Trifacta.
Watch Recording
David Sabater Dinter
Outbound Product Manager for Data Analytics, Google Cloud
Episode 5
Accelerating your Data Journey with Google Cloud Analytics Design Patterns
Designing a new data warehouse on Google Cloud can feel like choice overload. There’s a variety of analytic services to consider that will help build your data pipelines, BI, and AI outcomes.
Watch Recording
Steve Mitchell
Solutions Engineer
Episode 4
Faster Data Engineering from Trifacta With Pushdown Optimization
As data architectures evolve toward the modern data stack and its ELT approach, the difficult work of transformation (the T in ELT) is pushed down to cloud data warehouses (CDWs) to leverage the scale and power of these data repositories.
Watch Recording
Vijay Balasubramaniam
Director of Partner Solutions
Episode 3
How Trifacta Fits into Your Data Architecture - ETL, ELT, Custom Coding
Data engineering has evolved over the years to include various techniques such as ETL, ELT, or even custom code. Do you need to change your architecture to meet your data engineering needs? The answer is no. Find out why!
Connor Carreras
Director of SaaS Adoption & Enablement
Episode 2
How to Clean, Prepare, and Transform Data for Marketing and Sales Analytics
Delivering high-quality analytics requires scale, ease of use, and accuracy. In episode 2 of DIY Data, learn how to use pre-built Trifacta templates that get you started right away generating the right analytics for marketing and sales.
Misagh Jebeli
Senior Data Engineer, Trifacta
Episode 1
Analytics in the Cloud: Enabling the Modern Data Stack
Whether you are a Fortune 500 company or an emerging startup, data is the foundation for building workflows and gaining advanced insights into your business. But how do you build a data workflow that is open, flexible, and scalable enough to keep up with the demands of the business and deliver the analytics you need?
Watch Recording
Closing the Analytics Gap: How to Drive Collaboration Between Data Analysts and Engineers
Through two seasons of DIY Data, we’ve shown how Trifacta by Alteryx enables self-service data engineering across several different use cases and architectures, while also demonstrating how Trifacta makes it easy to deliver enterprise-grade data pipelines.
In this season finale episode, we’ll tie it all together by showing how data engineers and analysts can collaborate on a data analytics job within Trifacta, leveraging their individual strengths to deliver a fully functioning data pipeline in minutes rather than hours. As they work alongside one another, we’ll show you how:
- Data analysts can leverage their domain knowledge to prepare and transform the data using no-code predictive transformation suggestions and an intuitive user interface
- Data engineers can easily build connections to data sources and schedule the automation of data pipelines
Ready to see how Trifacta is uniting teams to achieve insightful analytics? Let’s go!
Misagh Jebeli
Senior Data Engineer
Paul Warburg
Data-Driven Product Marketer
Delivering Insights with Agility: How to Prepare and Deliver Data Without Writing Code
Code will always play a pivotal role in data pipelines. But even for skilled practitioners, writing code takes time – and brings the possibility of human error. For increased accuracy and efficiency, it makes sense to do as much no-code/low-code data engineering as possible – even if you’re code-proficient. But how much can you really do without code? More than you might think!
In this episode of DIY Data, Amit Miller, Data Wrangling Engineer, will demonstrate how data practitioners can build and orchestrate fully functional data pipelines, from ingestion to delivery, without writing a single line of code. Watch as we use nothing but machine-learning-guided clicks to join datasets, transform data, and deliver insights to downstream applications to drive intelligent business outcomes. It’s simple, quick, and efficient!
Ready to start transforming your data with us? Let’s go!
Amit Miller
Data Wrangling Engineer
Clean Data, Clean Teeth with Brushing! How to Get Quick, Quality Data Insights from Complex JSON Files
In this episode of DIY Data, you’ll discover how you can prepare complex JSON data files with a few simple clicks through a technique known as “brushing.” We’ll demonstrate different aspects of data transformation, including extracting values in a query string, splitting values, replacing values, and filtering data based on your criteria.
How does brushing work? Just highlight the exact piece of data you want to transform and instantly get a ranked set of intelligent transformation suggestions that are automatically generated based on the selected data. Ease of use, speed, and accuracy are the hallmarks of Trifacta’s data engineering platform, with techniques such as brushing allowing you to perform complex transformations with a few simple clicks.
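Outside of Trifacta’s point-and-click interface, the four operations this episode covers can be sketched in plain Python. The clickstream records and field names below are hypothetical, chosen only to illustrate each transformation:

```python
import json
from urllib.parse import urlparse, parse_qs

# Hypothetical clickstream records, standing in for a complex JSON file.
raw = '''
[{"user": "a|smith", "url": "https://shop.example.com/item?sku=123&ref=email", "status": "OK "},
 {"user": "b|jones", "url": "https://shop.example.com/item?sku=456&ref=ad",    "status": "err"}]
'''

rows = json.loads(raw)
cleaned = []
for row in rows:
    # Extract a value from the URL's query string (what a "brush" over ref=email might suggest).
    ref = parse_qs(urlparse(row["url"]).query).get("ref", [""])[0]
    # Split a delimited value into separate fields.
    first, last = row["user"].split("|")
    # Replace and standardize inconsistent values.
    status = row["status"].strip().lower().replace("err", "error")
    cleaned.append({"first": first, "last": last, "ref": ref, "status": status})

# Filter rows based on a criterion.
ok_rows = [r for r in cleaned if r["status"] == "ok"]
print(ok_rows)  # [{'first': 'a', 'last': 'smith', 'ref': 'email', 'status': 'ok'}]
```

In Trifacta, highlighting the piece of data you care about surfaces these steps as ranked suggestions; here each one is written out by hand for comparison.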
Eric Thomas
Sales Engineer, Trifacta
Accelerating your Data Journey with Google Cloud Analytics Design Patterns
Designing a new data warehouse on Google Cloud can feel like choice overload. There’s a variety of analytic services to consider that will help build your data pipelines, BI, and AI outcomes. And even after those choices are resolved, how do you integrate all these services? How do you model your data warehouse? Getting started requires time-consuming research, tests, and validation unless you can leverage the experience from others to guide you.
This is what the Google Cloud Design Patterns are all about. Design patterns are comprehensive guidelines that you can follow to implement and test an end-to-end solution in an instant.
Welcome to season two of Do-It-Yourself Data with Trifacta! In this next episode, we’ll show you how to design an analytics data warehouse for pricing analysis and optimization using Google Cloud design patterns. David Dinter, Outbound Product Manager for Data Analytics on Google Cloud Platform, will demonstrate how to use a design pattern that leverages:
- Dataprep
- BigQuery
- BigQuery ML
At the end of the session, you will get access to all the resources you need (GitHub, CodeLab, Demo) so you can try it for yourself!
Ready to start transforming your data with us? Let’s go!
David Sabater Dinter
Outbound Product Manager for Data Analytics, Google Cloud
David is a Product Manager for Data Analytics on Google Cloud Platform. He helps develop GCP’s vision and strategy for the Data Analytics portfolio, and also captures customer needs to drive product excellence and help customers succeed.
Before Google, he was an Associate Partner and Co-Founder of Data Reply UK. When he’s not at work, he can be found with his family cycling around Greenwich Park, traveling, or eating international food.
Faster Data Engineering from Trifacta With Pushdown Optimization
As data architectures evolve toward the modern data stack and its ELT approach, the difficult work of transformation (the T in ELT) is pushed down to cloud data warehouses (CDWs) to leverage the scale and power of these data repositories. But that is easier said than done: data arrives from disparate sources everywhere, making it cumbersome to push the transformation workload into your CDW without compromising performance. Trifacta makes it possible with a solution that gives you flexibility, ease of use, speed, and performance.
In the finale of DIY Data’s opening season, you’ll learn about Trifacta’s innovative Pushdown Optimization techniques, which are designed to harness the power of the CDW by performing data transformation through a SQL-based ELT approach. We’ll demonstrate our latest Pushdown Optimization capabilities on Snowflake in the AWS cloud, generating SQL scripts from Trifacta recipes to deliver intelligent, high-quality, AI-driven data transformations. You’ll also learn how this solution complements our pushdown capabilities on other CDWs such as Google BigQuery.
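To make the pushdown idea concrete, here is a deliberately tiny sketch of how transformation steps can be compiled into SQL that runs inside the warehouse instead of pulling rows out. The recipe format, step names, and `compile_to_sql` helper are all hypothetical, not Trifacta’s actual recipe language:

```python
# A toy "recipe": a list of declarative transformation steps (hypothetical format).
RECIPE = [
    {"op": "replace", "col": "state", "find": "Calif.", "with": "CA"},
    {"op": "filter",  "condition": "order_total > 0"},
]

def compile_to_sql(table: str, recipe: list) -> str:
    """Compile recipe steps into a single SQL statement the warehouse executes."""
    col_exprs, where = [], []
    for step in recipe:
        if step["op"] == "replace":
            # Value standardization becomes a REPLACE expression in the SELECT list.
            col_exprs.append(
                f"REPLACE({step['col']}, '{step['find']}', '{step['with']}') AS {step['col']}"
            )
        elif step["op"] == "filter":
            # Row filtering becomes a WHERE predicate.
            where.append(step["condition"])
    sql = f"SELECT {', '.join(col_exprs) or '*'} FROM {table}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    return sql

print(compile_to_sql("orders", RECIPE))
# SELECT REPLACE(state, 'Calif.', 'CA') AS state FROM orders WHERE order_total > 0
```

The point of the sketch: no rows ever leave the warehouse; only the generated SQL is shipped to it, which is what lets ELT exploit the CDW’s scale.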
Steve Mitchell
Solutions Engineer
How Trifacta Fits into Your Data Architecture - ETL, ELT, Custom Coding
Data engineering has evolved over the years to include various techniques such as ETL, ELT, or even custom code. Do you need to move your architecture to meet your data engineering needs? The answer is No. Trifacta offers you complete flexibility with an open and interactive approach to retain your data architecture and meet your data engineering requirements.
In this third episode of Do-It-Yourself Data, How Trifacta Fits into Your Data Architecture - ETL, ELT, Custom Coding, Vijay Balasubramaniam, Director of Partner Solutions at Trifacta, will dive deep into these different approaches and demonstrate:
- How Trifacta can support all three architectures: ETL, ELT, and Custom Code
- How to mix and match these architectures in Trifacta
- How to develop and orchestrate robust, automated data pipelines that combine the power of Trifacta’s interactive, visual interface with SQL and Python code
Vijay Balasubramaniam
Director of Partner Solutions
Vijay Balasubramaniam leverages his expertise in data management to help partners and customers be successful in large-scale analytics initiatives. He has over 18 years of experience helping large organizations manage their data sets and produce insights. He specializes in best-in-class data preparation workflows and developing end-to-end solutions on the AWS and GCP cloud platforms.
How to Clean, Prepare, and Transform Data for Marketing and Sales Analytics
Delivering high-quality analytics requires scale, ease of use, and accuracy.
In this second episode of Do-It-Yourself Data, How to Clean, Prepare, and Transform Data for Marketing and Sales Analytics, you’ll learn how to use pre-built templates from Trifacta to generate the right analytics for marketing and sales.
Connor Carreras, Director of SaaS Adoption & Enablement at Trifacta, will demonstrate how you can convert raw data from a typical CRM application into usable data for insights. With a ready-made template, you’ll learn how Trifacta enables you to:
- Connect to your data
- Clean and transform raw data by standardizing data values and dropping unnecessary attributes
- Perform the required calculations for analysis
Connor will also show you how to use flows and recipes within Trifacta for advanced insights and analytics.
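The steps above can be sketched in plain Python. The CRM export, column names, and stage labels below are invented for illustration and stand in for what a Trifacta template would handle interactively:

```python
import csv, io

# Hypothetical raw CRM export: inconsistent stage labels, an unneeded internal
# column, and amounts stored as text.
raw_csv = """opportunity,stage,amount,internal_notes
Acme renewal,closed won,1200,call back tuesday
Globex upsell,Closed-Won,800,
Initech trial,open,450,demo scheduled
"""

# Mapping used to standardize inconsistent stage values.
STAGE_MAP = {"closed won": "Closed Won", "closed-won": "Closed Won", "open": "Open"}

rows = []
for row in csv.DictReader(io.StringIO(raw_csv)):
    row.pop("internal_notes")                       # drop an unnecessary attribute
    row["stage"] = STAGE_MAP[row["stage"].lower()]  # standardize data values
    row["amount"] = float(row["amount"])            # make the field numeric
    rows.append(row)

# Perform a calculation required for the analysis: revenue from won deals.
won_revenue = sum(r["amount"] for r in rows if r["stage"] == "Closed Won")
print(won_revenue)  # 2000.0
```

Each line of the loop corresponds to one recipe step in Trifacta’s interface; the template packages those steps so you don’t write them by hand.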
Ready to start transforming your data with us? Let’s go!
Connor Carreras
Director of SaaS Adoption & Enablement
Connor Carreras leads SaaS adoption and enablement efforts at Trifacta, where she leverages Trifacta’s centralized data warehouse for insights about customer behavior. She can write SQL and code (but finds both to be a pain), and as such, avoids calling herself a data engineer. Connor previously worked at Informatica and Microsoft.
Whether you are a Fortune 500 company or an emerging startup, data is the foundation for building workflows and gaining advanced insights into your business. But how do you build a data workflow that is open, flexible, and scalable enough to keep up with the demands of the business and deliver the analytics you need?
Welcome to DIY Data from Trifacta! In this opening 30-minute episode of Do-It-Yourself Data, Analytics in the Cloud: Enabling the Modern Data Stack, we’ll focus on building efficient data workflows with the Modern Data Stack, an intelligent cloud-based framework that brings together data ingestion, data warehousing, and data analytics.
Misagh Jebeli, Senior Data Engineer, will show you how Trifacta enables the Modern Data Stack with practical examples of:
- Data onboarding
- Pushing to the cloud data warehouse
- Achieving analytics at scale for your business
Ready to start transforming your data with us? Let’s go!
Misagh Jebeli
Senior Data Engineer, Trifacta
Misagh has more than a decade of experience developing and leading BI and Analytics projects for various software companies and consulting firms. He specializes in building enterprise data warehouses and analytics platforms with the goal of minimizing time to value for analytics.