Creating Data Products in a Data Mesh, Data Lake or Data Lakehouse for use in Analytics

Request information on running this seminar onsite (it can also be delivered as live-streamed training)


Overview

Most companies today store data and run applications in a hybrid, multi-cloud environment. Analytical systems tend to be centralised and siloed: data warehouses and data marts for BI, Hadoop or cloud-storage data lakes for data science, and stand-alone streaming systems for real-time analysis. These centralised systems rely on data engineers and data scientists working within each silo to ingest data from many different sources, then clean and integrate it for use in a specific analytical system or in machine learning models. This centralised, siloed approach has many issues: multiple tools to prepare and integrate data, reinvention of data integration pipelines in each silo, and centralised data engineering teams that, with a poor understanding of source data, cannot keep pace with business demands for new data. In addition, master data is not well managed.

To address these issues, new data architectures have emerged that aim to accelerate the creation of data for use in multiple analytical workloads. Data Mesh is a decentralised data architecture with domain-oriented data ownership and decentralised, self-service data engineering, creating a mesh of data products that serve multiple analytical systems. Data Lakes can be used for the same purpose and integrated with Data Warehouses or Lakehouses, so that lower-latency data products are created once and then used in streaming analytics, business intelligence, data science and other analytical workloads.
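
To make the idea of a data product more concrete, below is a minimal Python sketch of what the contract metadata for a domain-owned data product might look like. The class and field names (DataProductContract, refresh_sla and so on) are illustrative assumptions for this brochure, not part of any Data Mesh standard or specific product.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DataProductContract:
        """Hypothetical contract for a domain-owned data product."""
        name: str                  # product name, e.g. "orders_cleansed"
        domain: str                # owning business domain
        owner: str                 # accountable data product owner
        schema_ref: str            # pointer to the published schema in a catalog
        refresh_sla: str           # e.g. "hourly" or "daily"
        quality_checks: List[str] = field(default_factory=list)

    # A domain team might publish a product like this to a data marketplace:
    orders = DataProductContract(
        name="orders_cleansed",
        domain="sales",
        owner="sales-data-team@example.com",
        schema_ref="catalog://sales/orders_cleansed/v2",
        refresh_sla="hourly",
        quality_checks=["no_null_order_id", "valid_currency_code"],
    )

The point of such a contract is that consumers in BI, data science or MDM can discover and trust the product without needing to know how the owning domain built it.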

This 2-day class examines the strengths and weaknesses of data lakes, data mesh and data lakehouses, and shows how multiple domain-oriented teams can use common data infrastructure software to create trusted, compliant, reusable data products in a Data Mesh or Data Lake for use in data warehouses, data lakehouses and data science to drive value. The objective is to shorten time to value while ensuring that data is correctly governed in a decentralised environment. The class also looks at the organisational implications of these architectures and at how to create shareable data products both for master data management and for use in multiple analytical workloads. Technologies discussed include data catalogs, self-service data integration, Data Fabric, DataOps, data warehouse automation, data marketplaces and data governance platforms.

Audience

This seminar is intended for business data analysts, data architects, chief data officers, master data management professionals, data scientists, IT ETL developers, and data governance professionals. It assumes an understanding of basic data management principles and data architecture, plus a reasonable understanding of data cleansing, data integration, data catalogs, data lakes and data governance.

Learning Objectives

Attendees will learn about:

  • Strengths and weaknesses of centralised data architectures used in analytics
  • The problems caused in existing analytical systems by a hybrid, multi-cloud data landscape
  • What is a Data Mesh, a Data Lake and a Data Lakehouse? What benefits do they offer?
  • What are the principles, requirements, and challenges of implementing these approaches?
  • How to organise the creation of data products in a decentralised environment while avoiding chaos
  • The critical importance of a data catalog in understanding what data is available as a service
  • How business glossaries can help ensure data products are understood and semantically linked
  • An operating model for effective federated data governance
  • What common data infrastructure software is required to operate and govern a Data Mesh, a Data Lake or a Data Lakehouse?
  • An implementation methodology to produce ready-made, trusted, reusable data products
  • Collaborative, domain-oriented development of modular and distributed DataOps pipelines to create data products (a minimal sketch follows this list)
  • How a data catalog and automation software can be used to generate DataOps pipelines
  • Managing data quality, privacy, access security, versioning, and the lifecycle of data products
  • Publishing semantically linked data products in a data marketplace for others to consume and use
  • Consuming data products in an MDM system
  • Consuming and assembling data products in multiple analytical systems to shorten time to value
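
To illustrate the modular pipeline idea referred to in the list above, the following minimal Python sketch composes small, reusable steps into a pipeline that produces a data product. The step functions and sample records are hypothetical, invented purely for illustration; real pipelines would be built with the DataOps tooling discussed in the seminar.

    from typing import Callable, Dict, List

    # A record is a simple dictionary; a step transforms a list of records.
    Record = Dict[str, object]
    Step = Callable[[List[Record]], List[Record]]

    def drop_nulls(key: str) -> Step:
        """Reusable step: remove records missing a required key."""
        return lambda records: [r for r in records if r.get(key) is not None]

    def standardise_currency(records: List[Record]) -> List[Record]:
        """Reusable step: upper-case ISO currency codes."""
        return [{**r, "currency": str(r["currency"]).upper()} for r in records]

    def run_pipeline(records: List[Record], steps: List[Step]) -> List[Record]:
        """Apply modular steps in order to produce the data product."""
        for step in steps:
            records = step(records)
        return records

    raw = [{"order_id": 1, "currency": "usd"},
           {"order_id": None, "currency": "eur"}]
    product = run_pipeline(raw, [drop_nulls("order_id"), standardise_currency])
    # product == [{"order_id": 1, "currency": "USD"}]

Because each step is independent and reusable, different domain teams can share steps across pipelines rather than reinventing the same cleansing and integration logic in every silo.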
