Tuesday, April 25, 2017
11 a.m. PT
2 p.m. ET
7 p.m. British Summer Time
Most big data is obscured. And just like finding something in a fog, you can only see it when you are near it or when it is so fresh that you remember exactly where it is. Whether your organization needs to perform big data analytics, comply with new data-oriented regulations, or become more cost-efficient by reducing the amount of redundant data it holds, you need to know what data you have and where it is located.
The problem: proximity and freshness work only for a very small amount of data. Meanwhile, the variety, volume, and velocity of incoming data continue to grow, and organizations become overwhelmed trying to make sense of it all.
Most organizations rely on the “streetlight” method to find data, searching for it only where there’s already a streetlight shining—not necessarily where the data is located—so they don’t even know what data is available. This is where tribal knowledge traditionally comes in, but results are spotty. People forget. People leave. And people make mistakes. It’s critical to be able to quickly discover, understand, and utilize your data to maintain a competitive advantage.
Join us for a highly interactive session as we discuss:
- How companies can lift the data fog and keep it lifted so business users can more readily find critical data and convert it into actionable business intelligence on an ongoing basis
- Making automated tagging of data a part of the regular project workflow to kickstart the initial identification of data
- Ways to curate automated results through Subject Matter Expert review
- Maintaining the human element in the equation by retaining data stewards or analysts who can officially accept or reject a tag at any time
- Establishing trust when it comes to the classification of your data to support tighter control over accessing and provisioning
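The workflow outlined above—automated tagging to kickstart identification, followed by a steward or analyst who can accept or reject any tag—can be sketched in a few lines. This is a minimal illustration, not Waterline Data's actual implementation: the `Tag` class, the rule table, and the `review` function are all hypothetical names invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Tag:
    """A proposed classification on a data column (hypothetical model)."""
    column: str
    label: str
    source: str = "auto"       # "auto" (machine-proposed) or "steward"
    status: str = "proposed"   # "proposed", "accepted", or "rejected"

# Simple rule-based auto-tagger: substring pattern -> classification label.
# Real tools use far richer signals (profiling, fingerprints, ML).
RULES = {
    "ssn": "PII.SSN",
    "email": "PII.Email",
    "dob": "PII.DateOfBirth",
}

def auto_tag(columns):
    """Propose tags for columns whose names match a known pattern."""
    tags = []
    for col in columns:
        for pattern, label in RULES.items():
            if pattern in col.lower():
                tags.append(Tag(column=col, label=label))
    return tags

def review(tag, accept):
    """A human steward accepts or rejects a proposed tag at any time."""
    tag.status = "accepted" if accept else "rejected"
    tag.source = "steward"
    return tag

# Automated pass proposes tags; a steward then curates the results.
tags = auto_tag(["customer_email", "signup_date", "ssn_hash"])
for t in tags:
    # Example: the steward rejects the SSN tag as a false positive
    # (ssn_hash is an irreversible hash, not raw PII).
    review(t, accept=(t.label != "PII.SSN"))
```

The key design point the session highlights is that the machine-proposed status and the human verdict are tracked separately, so downstream access and provisioning controls can trust an "accepted" tag while still surfacing unreviewed proposals.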
Moderator: Michael Ferguson
Managing Director, Intelligent Business Strategies, Ltd.
As an analyst and consultant, Mike Ferguson specializes in business intelligence, big data, data management, and enterprise business integration. He has over 34 years of IT experience, has spoken at events all over the world, and has written numerous articles.
Speaker: Andrew Ahn
Senior Director, Product Management, Waterline Data
Andrew Ahn is the Senior Director of Product for Waterline Data. He has over 12 years of experience in enterprise-scale big data. He is an Apache Atlas committer and was the lead at Hortonworks for Hadoop governance strategy, with product duties for Apache Atlas. Prior work includes Product and Governance responsibilities at ICE/NYSE Euronext, spanning 12 countries and 23 market centers.