Become a Data Engineer
Data Engineering is the foundation for the new world of Big Data. Enroll now to build production-ready data infrastructure, an essential skill for advancing your data career.
ESTIMATED TIME: 5 months at 5 hrs/week
ENROLL BY: August 14, 2019
Get access to the classroom immediately upon enrollment.
Intermediate Python & SQL
Intermediate Python programming knowledge, of the sort gained through the Programming for Data Science Nanodegree program, other introductory programming courses, or real-world software development experience, including:
- Strings, numbers, and variables; statements, operators, and expressions
- Lists, tuples, and dictionaries; conditions and loops
- Procedures, objects, modules, and libraries
- Troubleshooting and debugging; research and documentation
- Problem solving; algorithms and data structures
This content is also available in the Introduction to Python Programming course.
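As a rough self-check (not part of the official requirements), the expected level is about writing a short function like the one below from scratch: it exercises strings, dictionaries, loops, and procedures from the list above. The task and names are illustrative, not from the program.

```python
def word_counts(lines):
    """Return a dict mapping each lowercased word to its frequency.

    Uses only basic data structures (lists, dicts), loops, and string
    methods -- the level of Python this program assumes.
    """
    counts = {}
    for line in lines:
        for word in line.lower().split():
            counts[word] = counts.get(word, 0) + 1
    return counts

lines = ["the quick brown fox", "the lazy dog"]
print(word_counts(lines)["the"])  # 2
```

If reading and writing code like this feels comfortable, the Python prerequisite is likely met.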
Intermediate SQL knowledge, also addressed in the Programming for Data Science Nanodegree program, including:
- Joins, Aggregations, and Subqueries
- Table definition and manipulation (CREATE, UPDATE, INSERT, ALTER)
This content is also available in the SQL for Data Analysis course.
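A quick self-check of these SQL prerequisites: the sketch below touches table definition, inserts, a join with aggregation, and a subquery. It uses `sqlite3` from the Python standard library purely for convenience; the program itself works with PostgreSQL and Apache Cassandra, and the schema here is made up.

```python
import sqlite3

# In-memory database; sqlite3 stands in for the databases used in the program.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Table definition and manipulation (CREATE, INSERT).
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)")
cur.executemany("INSERT INTO users VALUES (?, ?)", [(1, "Ada"), (2, "Grace")])
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 1, 9.50), (2, 1, 20.00), (3, 2, 5.00)])

# Join + aggregation: total spent per user.
cur.execute("""
    SELECT u.name, SUM(o.total) AS spent
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
    ORDER BY spent DESC
""")
print(cur.fetchall())  # [('Ada', 29.5), ('Grace', 5.0)]

# Subquery: users with at least one above-average order.
cur.execute("""
    SELECT name FROM users
    WHERE id IN (SELECT user_id FROM orders
                 WHERE total > (SELECT AVG(total) FROM orders))
""")
print(cur.fetchall())  # [('Ada',)]
```

If each of these queries reads naturally, the SQL prerequisite is likely met.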
BUILT IN COLLABORATION WITH
What You Will Learn
Learn to design data models, build data warehouses and data lakes, automate data pipelines, and work with massive datasets. At the end of the program, you’ll combine your new skills by completing a capstone project.
5 months to complete
To be successful in this program, you should have intermediate Python and SQL skills. See detailed requirements.
Data Modeling
Learn to create relational and NoSQL data models to fit the diverse needs of data consumers. Use ETL to build databases in PostgreSQL and Apache Cassandra.
DATA MODELING WITH POSTGRES
DATA MODELING WITH APACHE CASSANDRA
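The extract-transform-load (ETL) pattern mentioned above can be sketched in a few lines. This is a toy illustration, not the program's actual project: `sqlite3` stands in for PostgreSQL, an inline CSV string stands in for raw source files, and the song schema is invented.

```python
import csv
import io
import sqlite3

# Raw source data, as it might arrive from an upstream system.
raw_csv = "song,artist,duration\nHey Jude,The Beatles,431\nSo What,Miles Davis,545\n"

# Extract: parse the raw records.
rows = list(csv.DictReader(io.StringIO(raw_csv)))

# Transform: cast types and derive a duration-in-minutes column.
records = [(r["song"], r["artist"], int(r["duration"]),
            round(int(r["duration"]) / 60, 2)) for r in rows]

# Load: write the cleaned records into a relational table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE songs
                (title TEXT, artist TEXT, duration_s INTEGER, duration_min REAL)""")
conn.executemany("INSERT INTO songs VALUES (?, ?, ?, ?)", records)
conn.commit()

print(conn.execute("SELECT title, duration_min FROM songs ORDER BY title").fetchall())
# [('Hey Jude', 7.18), ('So What', 9.08)]
```

The program applies this same extract/transform/load shape at realistic scale, against PostgreSQL and Apache Cassandra.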
Cloud Data Warehouses
Sharpen your data warehousing skills and deepen your understanding of data infrastructure. Create cloud-based data warehouses on Amazon Web Services (AWS).
BUILD A CLOUD DATA WAREHOUSE
Spark and Data Lakes
Understand the big data ecosystem and how to use Spark to work with massive datasets. Store big data in a data lake and query it with Spark.
BUILD A DATA LAKE
Data Pipelines with Airflow
Schedule, automate, and monitor data pipelines using Apache Airflow. Run data quality checks, track data lineage, and work with data pipelines in production.
DATA PIPELINES WITH AIRFLOW
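Airflow models a pipeline as a directed acyclic graph (DAG) of tasks, where each task runs only after its upstream dependencies succeed. The toy sketch below illustrates just that ordering idea using only the Python standard library (`graphlib`), not Airflow itself; the task names mimic a typical load-then-check pipeline and are invented.

```python
from graphlib import TopologicalSorter

# Map each task to the set of tasks that must finish before it.
deps = {
    "stage_events": set(),
    "stage_songs": set(),
    "load_warehouse": {"stage_events", "stage_songs"},
    "quality_check": {"load_warehouse"},
}

# A valid execution order: every task appears after its dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Airflow adds what this sketch lacks: scheduling, retries, parallel execution of independent tasks, monitoring, and data quality checks in production.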
Capstone Project
Combine what you’ve learned throughout the program to build your own data engineering portfolio project.
DATA ENGINEERING CAPSTONE
Learn with the best
DEVELOPER ADVOCATE AT DATASTAX
Amanda is a Developer Advocate at DataStax, after spending the last six years as a software engineer on four different distributed databases. Her passion is bridging the gap between customers and engineering. She holds degrees from the University of Washington and Santa Clara University.
STAFF ENGINEER AT SPOTHERO
In his career as an engineer, Ben Goldberg has worked in fields ranging from Computer Vision to Natural Language Processing. At SpotHero, he founded and built out their Data Engineering team, using Airflow as one of the key technologies.
CEO AT NOVELARI & ASSISTANT PROFESSOR AT NILE UNIVERSITY
Sameh is the CEO of Novelari and a lecturer at Nile University and the American University in Cairo (AUC), where he has lectured on security, distributed systems, software engineering, blockchain, and big data engineering.
DATA ENGINEER AT WOLT
Olli works as a Data Engineer at Wolt. He has several years of experience building and managing data pipelines in various data warehousing environments, and has been a fan and active user of Apache Airflow since its first incarnations.
VP OF ENGINEERING AT INSIGHT
David is VP of Engineering at Insight where he enjoys breaking down difficult concepts and helping others learn data engineering. David has a PhD in Physics from UC Riverside.
DATA ENGINEER AT SPLIT
Judit was formerly an instructor at Insight Data Science, helping software engineers and academic coders transition to data engineering roles. Currently, she is a Data Engineer at Split, where she works on the statistical engine of their full-stack experimentation platform.
CURRICULUM LEAD AT UDACITY
Juno is the curriculum lead for the School of Data Science. She has been sharing her passion for data and teaching, building several courses at Udacity. As a data scientist, she built recommendation engines, computer vision and NLP models, and tools to analyze user behavior.
GET STARTED WITH
Data Engineer Nanodegree Program