
Data Engineering with Apache Spark, Delta Lake, and Lakehouse: Create scalable pipelines that ingest, curate, and aggregate complex data in a timely and secure way

Price: AED 201.00 (inclusive of VAT)
Free delivery from the noon marketplace

Delivery by noon
High Rated Seller
Cash on Delivery
Secure Transaction
Overview
Specifications
ISBN-13: 9781801077743
ISBN-10: 1801077746
Author: Manoj Kukreja
Book Format: Paperback
Language: English
Book Description: Understand the complexities of modern-day data engineering platforms and explore strategies to deal with them with the help of use case scenarios led by an industry expert in big data.

Key Features:
  • Become well-versed with the core concepts of Apache Spark and Delta Lake for building data platforms
  • Learn how to ingest, process, and analyze data that can be later used for training machine learning models
  • Understand how to operationalize data models in production using curated data

In the world of ever-changing data and schemas, it is important to build data pipelines that can auto-adjust to changes. This book will help you build scalable data platforms that managers, data scientists, and data analysts can rely on.

Starting with an introduction to data engineering, along with its key concepts and architectures, this book will show you how to use Microsoft Azure Cloud services effectively for data engineering. You'll cover data lake design patterns and the different stages through which the data needs to flow in a typical data lake. Once you've explored the main features of Delta Lake to build data lakes with fast performance and governance in mind, you'll advance to implementing the lambda architecture using Delta Lake.

Packed with practical examples and code snippets, this book takes you through real-world examples based on production scenarios faced by the author in his 10 years of experience working with big data. Finally, you'll cover data lake deployment strategies that play an important role in provisioning the cloud resources and deploying the data pipelines in a repeatable and continuous way.

By the end of this data engineering book, you'll know how to effectively deal with ever-changing data and create scalable data pipelines to streamline data science, ML, and artificial intelligence (AI) tasks.
What you will learn:
  • Discover the challenges you may face in the data engineering world
  • Add ACID transactions to Apache Spark using Delta Lake
  • Understand effective design strategies to build enterprise-grade data lakes
  • Explore architectural and design patterns for building efficient data ingestion pipelines
  • Orchestrate a data pipeline for preprocessing data using Apache Spark and Delta Lake APIs
  • Automate deployment and monitoring of data pipelines in production
  • Get to grips with securing, monitoring, and managing data pipeline models efficiently

Who this book is for: This book is for aspiring data engineers and data analysts who are new to the world of data engineering and are looking for a practical guide to building scalable data platforms. If you already work with PySpark and want to use Delta Lake for data engineering, you'll find this book useful. Basic knowledge of Python, Spark, and SQL is expected.
Publication Date: 2021-11-11
