It’s pretty clear that Business Intelligence is becoming even more strategic in this tough economic climate as companies seek greater insight to help them stay profitable. It almost goes without saying that intelligence has to be trusted for BI to be reliable and to support confident decision making. So, on the eve of the Data Governance conference which starts next week (Feb 2-5) in London, I thought I would put in a plug for Enterprise Data Governance. This is a fast-growing topic and one that requires organisational change, processes and technology to manage successfully. Structured data needs to be formally defined and named (a shared business vocabulary), and BI systems (data models, BI tool business views, reports etc.) need to be changed to reflect these commonly understood data definitions.
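To make the idea of a shared business vocabulary concrete, here is a minimal sketch of one: each business term carries a formal definition plus the names it goes by in disparate source systems. All of the terms, system names and field names below are hypothetical, invented purely for illustration.

```python
from typing import Optional

# A shared business vocabulary: formally defined and named business terms,
# mapped to the field names used in disparate source systems.
business_vocabulary = {
    "customer_id": {
        "definition": "Unique identifier for a customer across the enterprise",
        "source_mappings": {
            "crm": "CUST_NO",
            "billing": "AccountRef",
        },
    },
    "net_revenue": {
        "definition": "Invoiced amount net of tax, discounts and refunds",
        "source_mappings": {
            "billing": "NET_AMT",
            "finance_dw": "net_rev",
        },
    },
}

def canonical_name(system: str, source_field: str) -> Optional[str]:
    """Return the shared business term for a source system field, if mapped."""
    for term, entry in business_vocabulary.items():
        if entry["source_mappings"].get(system) == source_field:
            return term
    return None
```

For example, `canonical_name("billing", "NET_AMT")` resolves the billing system’s field to the commonly understood term `"net_revenue"`, which is the kind of mapping BI data models and business views would then be changed to reflect.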
In addition, data in disparate systems needs to be identified and mapped to the common definitions so as to build up knowledge of how to turn disparate data into trusted data. At that point you can make sure that data integration is done in a way that creates trusted data for use in BI systems. It is critical that you build modular data integration jobs (e.g. one data integration job per dimension) so that you can re-use data integration ‘services’ if need be to guarantee trusted data every time. These days, of course, data quality checking is built into many data integration jobs, and it is important to strive for this. Once you have achieved this, your data integration tool should provide a valuable set of metadata to support lineage should a user need to trace data back to where it came from. And once your trusted data is available, you can monitor it to keep quality high and take action if quality deteriorates over time.
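The shape of such a modular job can be sketched as follows: one job per dimension, with data quality rules built in and simple lineage metadata recorded on each run. This is not any particular data integration tool’s API; every name here (the function, fields, source and quality rules) is a hypothetical illustration.

```python
from datetime import datetime, timezone

def load_customer_dimension(source_rows, lineage_log):
    """One modular integration job: build the trusted customer dimension."""
    trusted, rejected = [], []
    for row in source_rows:
        # Data quality built into the job: reject rows failing basic rules.
        if not row.get("customer_id") or "@" not in row.get("email", ""):
            rejected.append(row)
            continue
        # Conform the row to the shared business vocabulary.
        trusted.append({
            "customer_id": row["customer_id"],
            "customer_name": row.get("name", "").strip().title(),
            "email": row["email"].lower(),
        })
    # Record lineage metadata so users can trace data back to its source.
    lineage_log.append({
        "job": "load_customer_dimension",
        "source": "crm_extract",
        "run_at": datetime.now(timezone.utc).isoformat(),
        "rows_in": len(source_rows),
        "rows_trusted": len(trusted),
        "rows_rejected": len(rejected),
    })
    return trusted, rejected
```

Because the job is self-contained, it can be re-used as a ‘service’ wherever the customer dimension is needed, and the lineage records it emits accumulate into exactly the kind of metadata that lets quality be monitored over time.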
I’d be interested in your thoughts on enterprise data governance. Let me know what you are doing in this space.