Welcome to the Cloud Data Engineering course! This comprehensive 6-8 month journey is designed to equip you with the necessary skills to become a proficient Data Engineer, focusing on cloud-based technologies, data acquisition, modeling, warehousing, and orchestration. Our curriculum is divided into 5 modules that include hands-on projects, assignments, and real-world case studies to ensure a practical understanding of the technologies covered.
This repository includes the roadmap for Data Engineering. Since Data Engineering is a broad field, we'll try to cover the following tools.
VERSION: 1.3.0
- Course Overview
- Understanding Data Engineering
- Module 1: Data Acquisition
- Module 2: Data Modeling
- Module 3: Cloud Data Warehousing
- Module 4: Data Orchestration & Streaming
- Module 5: Architecting AWS Data Engineering Projects
- Why These Technologies?
- Final Notes
This course is meticulously crafted to cover all facets of Cloud Data Engineering. You'll learn everything from the basics of data acquisition and transformation to advanced cloud-based data warehousing, orchestration, and streaming techniques. The course is structured to build your skills progressively, ensuring you are ready to tackle complex data engineering challenges by the end.
Before digging deep into the field of Data Engineering, one should know what Data Engineering is, what its scope looks like in 2024, and which tools a Data Engineer is expected to know.
The focus of this module is on acquiring, manipulating, and processing data from various sources. You'll set up your data engineering environment, explore Python for data manipulation, manage projects with version control, and gain hands-on experience with web scraping using BeautifulSoup and Selenium.
- Introduction to Data Engineering + Python Setup
  - Objective: Understand the fundamentals of Data Engineering and Python setup.
  - Outcome: Be proficient in setting up Python and tools for data handling.
It is recommended to watch the videos at 1.5x or 2.0x speed, as some concepts are covered at a slow pace.
- Python for Data Engineering (Numpy + Pandas) + Case Study
  - Objective: Data manipulation using Pandas.
  - Case Study: Clean and analyze real-world datasets using Pandas (see the sketch below).
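A minimal sketch of the kind of cleaning and analysis the case study asks for; the file name `sales.csv` and its column names are hypothetical placeholders:

```python
# Hypothetical dataset: sales.csv with order_id, order_date, amount columns.
import pandas as pd

df = pd.read_csv("sales.csv")

# Drop exact duplicate rows and rows missing the key field
df = df.drop_duplicates()
df = df.dropna(subset=["order_id"])

# Normalize types: parse dates, coerce bad numeric values to NaN
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

# Simple analysis: total revenue per month
monthly = df.groupby(df["order_date"].dt.to_period("M"))["amount"].sum()
print(monthly)
```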
- Version Control (Git + Python Project)
  - Objective: Learn Git for version control and collaboration.
  - Assignment: Create a Python project and push it to GitHub.
- Bash/Shell Scripting
  Bash/Shell scripting and Linux commands are vital in a Cloud Data Engineering roadmap because of their automation capabilities, which are essential for tasks like data processing and infrastructure management. Proficiency with them brings flexibility, troubleshooting skills, and compatibility with cloud platforms, and it helps optimize costs through efficient resource usage while streamlining version control and deployment processes.
  - Objective: Automate repetitive tasks using Shell scripts.
  - Assignment: Automate a data acquisition task using Bash.
You're responsible for the security of a server, which involves monitoring a log file named security.log. This file records security-related events, including successful and failed login attempts, file access violations, and network intrusion attempts. Your goal is to analyze this log file to extract crucial security insights.
Create a sample log file named `security.log` with the following format:

```
2024-03-29 08:12:34 SUCCESS: User admin login
2024-03-29 08:15:21 FAILED: User guest login attempt
2024-03-29 08:18:45 ALERT: Unauthorized file access detected
2024-03-29 08:21:12 SUCCESS: User admin changed password
2024-03-29 08:24:56 FAILED: User root login attempt
2024-03-29 08:27:34 ALERT: Possible network intrusion detected
```
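The assignment itself asks for Bash, but as a reference point, here is a minimal Python sketch (not the Bash solution) of the kind of event summary the analysis should produce, assuming the `security.log` format shown above:

```python
# Count SUCCESS / FAILED / ALERT events in security.log.
from collections import Counter

counts = Counter()
with open("security.log") as f:
    for line in f:
        # Each entry looks like: "2024-03-29 08:12:34 SUCCESS: User admin login"
        parts = line.split()
        if len(parts) >= 3:
            counts[parts[2].rstrip(":")] += 1

for event, n in counts.items():
    print(f"{event}: {n}")
```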
- Docker w.r.t. Data Engineering
  Docker is integral to a Cloud Data Engineering roadmap for its ability to encapsulate data engineering environments into portable containers. This ensures consistency across development, testing, and production stages, facilitating seamless deployment and scaling of data pipelines. Docker's lightweight nature makes efficient use of cloud infrastructure, and it promotes collaboration by simplifying the sharing of reproducible environments among team members, enhancing productivity and reproducibility in data engineering workflows.
- Web Scraping with BeautifulSoup
  - Objective: Extract data from static websites.
  - Assignment: Scrape data from a webpage and save it in CSV/JSON format (see the sketch below).
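A minimal sketch of the static-scraping assignment; the URL and CSS selectors are hypothetical placeholders:

```python
# Scrape a (hypothetical) static product listing and save it as CSV.
import csv

import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/books")  # hypothetical URL
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

rows = []
for item in soup.select("article.product"):  # hypothetical selector
    title = item.select_one("h3").get_text(strip=True)
    price = item.select_one(".price").get_text(strip=True)
    rows.append({"title": title, "price": price})

with open("books.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "price"])
    writer.writeheader()
    writer.writerows(rows)
```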
- Web Scraping with Selenium
  - Objective: Scrape data from dynamic websites.
  - Assignment: Create a Selenium script to scrape data from an e-commerce site (see the sketch below).
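A minimal sketch for the dynamic-scraping assignment; the URL and selectors are hypothetical, and Selenium 4's built-in driver management is assumed:

```python
# Scrape a (hypothetical) JavaScript-rendered shop page with Selenium.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/shop")  # hypothetical URL
    # Wait for the JavaScript-rendered product cards to appear
    WebDriverWait(driver, 10).until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".product-card"))
    )
    for card in driver.find_elements(By.CSS_SELECTOR, ".product-card"):
        print(card.find_element(By.CSS_SELECTOR, "h3").text)
finally:
    driver.quit()
```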
Dive into database design, emphasizing efficient data storage, retrieval, and optimization. Learn SQL for data manipulation and advanced querying techniques.
- SQL Server & SSMS Installation
  - In this tutorial, you will learn to install SQL Server 2022 Developer Edition and SQL Server Management Studio (SSMS).
- SQL Fundamentals with SQL Server
  - Basic operations: `SELECT`, `WHERE`, `ORDER BY`
  - Data integrity using constraints
- Data Definition and Manipulation
  - Create and alter database structures using DDL
  - Modify data within tables using DML
- Advanced Querying Techniques
  - Aggregations, `GROUP BY`, and set operations like `UNION`, `INTERSECT`, `EXCEPT`
  - Use `CUBE` and `ROLLUP` for multidimensional analysis
- Joining Data
  - Mastering `INNER`, `LEFT`, `RIGHT`, and `FULL OUTER` joins
- Performance and Structure
  - Query optimization with indexes
  - Utilizing `VIEWS` and `SUBQUERIES`
- Advanced SQL Concepts
  - Use Common Table Expressions (CTEs) and Window Functions (see the sketch after this list)
- Encapsulating Logic
  - Writing `STORED PROCEDURES` and using `TRIGGERS`
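To make the CTE and window-function ideas concrete, here is a minimal Python sketch that runs such a query against SQL Server via `pyodbc`; the connection string, table, and column names (`Sales`, `region`, `amount`) are hypothetical:

```python
# Run a CTE + window-function query against SQL Server.
# Requires the pyodbc package and an installed SQL Server ODBC driver.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=DemoDB;Trusted_Connection=yes;"
)

query = """
WITH regional AS (
    SELECT region, amount
    FROM Sales
)
SELECT region,
       amount,
       SUM(amount) OVER (PARTITION BY region) AS region_total,
       ROW_NUMBER() OVER (PARTITION BY region ORDER BY amount DESC) AS rank_in_region
FROM regional;
"""

for row in conn.cursor().execute(query):
    print(row.region, row.amount, row.region_total, row.rank_in_region)
```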
Master cloud-based data warehousing with Snowflake, focusing on scalability and handling large datasets. Gain hands-on experience through badge assignments and projects.
- Snowflake Overview
  - Setting up your Snowflake environment (see the connection sketch below)
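A minimal connection sketch using the `snowflake-connector-python` package; the account identifier, credentials, and object names are placeholders you'd replace with your own:

```python
# Connect to Snowflake and verify the session works.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account_identifier",  # placeholder
    user="your_user",                   # placeholder
    password="your_password",           # placeholder
    warehouse="COMPUTE_WH",
    database="DEMO_DB",
    schema="PUBLIC",
)

cur = conn.cursor()
cur.execute("SELECT CURRENT_VERSION()")
print(cur.fetchone())
cur.close()
conn.close()
```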
- Badges Assignment
  - Badge 1: Data Warehousing Workshop
  - Badge 2: Collaboration, Marketplace & Cost Estimation Workshop
  - Badge 3: Data Application Builders Workshop
  - Badge 4: Data Lake Workshop
  - Badge 5: Data Engineering Workshop
- Snowflake Masterclass (Udemy)
  - Working through 5 sections of Snowflake – The Complete Masterclass to solidify your understanding
- Project 3: Snowflake Real-Time Data Warehouse for Beginners
- Project 4: Change Data Capture Pipeline with Snowflake & AWS
Explore data pipeline management with Apache Airflow and real-time data streaming with Apache Kafka.
When we have a data pipeline that needs to run on a daily basis, we need an automation or orchestration tool to handle the scheduling. Apache Airflow is the most widely adopted choice for this, which is why it is part of our roadmap.
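As a concrete example, here is a minimal daily-triggered DAG sketch (Airflow 2.x syntax; the task logic is a hypothetical placeholder):

```python
# A two-task daily pipeline: extract, then load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling data from the source...")  # placeholder logic

def load():
    print("loading data into the warehouse...")  # placeholder logic

with DAG(
    dag_id="daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # on Airflow < 2.4 use schedule_interval instead
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```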
- Project 4: Twitter Data Pipeline Using Airflow
- Project 5: Automate a Python ETL Pipeline with Airflow on AWS EC2
- Project 6: Deploying Airflow with Docker
When data arrives in real time and the end destination isn't ready to consume it, or a disaster happens, we lose that data. This introduces the need for a decoupling tool that separates the producer end of the data from the consumer end and acts as a mediator; Apache Kafka fills exactly this role.
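A minimal sketch of that producer/consumer decoupling using the `kafka-python` package; the broker address and topic name are hypothetical:

```python
# Producer and consumer decoupled through a Kafka broker.
from kafka import KafkaConsumer, KafkaProducer

# Producer side: events are written to Kafka even if no consumer is up yet
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("clickstream", b'{"user": 1, "action": "click"}')
producer.flush()

# Consumer side: reads at its own pace; the broker retains the data
consumer = KafkaConsumer(
    "clickstream",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
)
for message in consumer:
    print(message.value)
```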
Dive deep into architecting data engineering solutions using AWS services. This module covers a wide range of tools from data storage, ETL, real-time data processing, to serverless computing.
AWS is crucial in a Cloud Data Engineering roadmap due to its comprehensive suite of services tailored for data processing, storage, and analytics. Leveraging AWS allows data engineers to build scalable and cost-effective data pipelines using services like S3, Glue, and EMR. Integration with other AWS services enables advanced analytics, machine learning, and real-time processing capabilities, empowering data engineers to derive valuable insights from data. Furthermore, AWS certifications validate expertise in cloud data engineering, enhancing career prospects and credibility in the industry.
- AWS Redshift
  - Build and manage cloud-based data warehouses
- AWS S3
  - Store and manage data efficiently (see the sketch after this list)
- AWS Glue & Athena
  - Master ETL processes and querying
- AWS Lambda
  - Automate workflows with serverless functions
- AWS EC2
  - Manage compute resources for data processing
- AWS RDS
  - Relational database management
- AWS Kinesis
  - Real-time data streaming solutions
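As a small taste of the SDK side of these services, here is a minimal `boto3` sketch of the S3 usage listed above; the bucket and key names are hypothetical, and AWS credentials are assumed to be configured (e.g. via `aws configure`):

```python
# Upload a local file to S3, then list the objects under its prefix.
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key names
s3.upload_file("data/events.csv", "my-demo-bucket", "raw/events.csv")

resp = s3.list_objects_v2(Bucket="my-demo-bucket", Prefix="raw/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```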
- Project 8: Batch Data Pipeline Using S3, Lambda & CloudWatch
- Project 9: ETL Pipeline Using Glue, Athena & S3
- Project 10: Super Store Data Analysis Using Glue & QuickSight
- Project 11: Extract and Transform Redfin Data with AWS EMR
- Project 12: End-To-End Data Engineering Project
The technologies selected in this course are widely used in the data engineering industry. Python, SQL, Snowflake, Apache Airflow, and AWS are among the most in-demand skills, ensuring that you are job-ready by the end of this course. Each module builds upon the previous one, enabling you to apply theoretical knowledge to real-world projects.
Throughout the course, you will engage in hands-on projects and assignments that simulate real-world data engineering tasks. This practical experience is crucial for mastering the skills required to excel in the field of Cloud Data Engineering.
Get ready to embark on this exciting journey of becoming a proficient Cloud Data Engineer! 🚀