Enterprise DataOps – Curating Trusted Data as a Service from Data Lake to Data Marketplace (6-7 April 2020, Helsinki)
Unified Data Delivery from Data Lake to Enterprise Data Marketplace – Governing your Data across Hadoop, Cloud Storage, Data Warehouses, MDM & NoSQL Data Stores
Most organisations today are dealing with multiple silos of information. These include cloud-based and on-premises transaction systems, multiple data warehouses, data marts, master data management (MDM) systems, NoSQL databases, cloud storage, Hadoop and content management systems. In addition, the number of data sources is increasing. It is not surprising that many companies struggle to know what data is available, whether they can trust it and how to go about integrating it. Many have ended up managing information in silos, with different tools used to prepare and integrate data. Furthermore, both IT and business users are now integrating data, and the danger is total chaos. The question is: do we let this continue, or is there another way to govern and unify data across an increasingly complex data landscape that can shorten time to value?
This 2-day seminar looks at how a DataOps approach proposes a new data architecture in which data is ingested and organised in a data lake, and IT and business users collaborate to unify data and build ready-made, trusted data assets that are published in a data marketplace for others to consume and use to drive value. The objective is to reduce time to value and avoid reinvention while governing and unifying data quickly in a multi-cloud, multi-data-store, hybrid computing environment.
This seminar is intended for business data analysts doing self-service data integration, data architects, chief data officers, master data management professionals, database administrators, big data professionals, data integration developers, and compliance managers who are responsible for data management. This includes metadata management, data integration, data quality, master data management and enterprise content management. The seminar is not only for ‘Fortune 500 scale’ companies but for any organisation that has to deal with big data, small data, multiple data stores and multiple data sources. It assumes you have an understanding of basic data management principles and a high-level understanding of concepts such as data migration, data replication, metadata, data warehousing, data modelling and data cleansing.
Attendees will learn:
- How to define a strategy for producing trusted data as-a-service in a distributed environment of multiple data stores and data sources
- How to organise data in a centralised or distributed data environment to overcome complexity and chaos
- How to design, build, manage and operate a logical or centralised data lake within their organisation
- The critical importance of an information catalogue in understanding what data is available as a service
- How data standardisation and business glossaries can help make sure data is understood
- An operating model for effective distributed information governance
- What technologies and implementation methodologies they need to get their data under control and produce ready-made trusted data products
- How to collaboratively curate trusted, ready-made data products and publish them in a data marketplace where people can shop for data
- How to apply methodologies to get master and reference data, big data, data warehouse data and unstructured data under control, whether it resides on-premises or in the cloud
- How to fuel rapid ‘last mile’ analytical development to reduce time to value