In the 30+ years I have been in this industry, I can’t remember a time when the IT infrastructure and application landscape of most of my clients has been so complicated. Today we have hardware servers, clusters, virtual servers, storage systems, file systems, database servers, virtual storage, content management systems, application servers, cloud-based applications, IVR servers, web servers, network infrastructure, virtual networks, security servers, data integration tools, big data platforms, BI servers, data warehouses, data marts, admin tools, desktop applications, mobile clients, mobile device management tools and data that is increasingly distributed. Oh, and I am sure that is not an exhaustive list.
Looking at this, it is not surprising that people are pulling their hair out trying to manage it. Making sense of today’s application landscape and IT infrastructure is a nightmare. However, all hope is not lost, because many of these application and infrastructure components come with their own logs, which work frantically in the background recording every activity, every error, every click, every login and so on. For many years this ‘digital exhaust data’ has just sat there gathering dust. At best, the log data is archived and backed up to low-cost storage every night. At worst, it is discarded and never used.
Yet it turns out that this dormant ‘machine data’ is hugely valuable if you take the trouble to load it, integrate it and analyse it. It provides insight into questions such as:
- What parts of the IT infrastructure are in melt-down?
- What is underutilised?
- What applications are heavily used?
- How much resource does an application use?
- Which transactions are used the most?
- Which transactions provide the most business value?
- Who violated security?
- Who accessed this sensitive data and when?
- What are the most dominant paths taken by users when using an application?
- What application transactions and functionality are used in a process?
- How long does it take to perform a transaction versus how long should it take?
- Etc., etc.
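To make this concrete, here is a minimal sketch of mining one such log, a web server access log, to answer two of the questions above: which transactions are used the most, and how long they take. The log line layout and the `summarise` helper are hypothetical assumptions for illustration, not any particular product’s format:

```python
import re
from collections import Counter

# Hypothetical access-log line (simplified Apache-style, with a trailing
# response time in seconds):
# 10.0.0.1 alice [10/Oct/2022:13:55:36] "GET /checkout HTTP/1.1" 200 2326 0.172
LOG_LINE = re.compile(
    r'^(?P<ip>\S+) (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]+" '
    r'(?P<status>\d{3}) (?P<bytes>\d+) (?P<secs>[\d.]+)$'
)

def summarise(lines):
    """Return {path: (hit count, average response time in seconds)}."""
    hits = Counter()
    total_secs = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if not m:
            continue  # skip lines that don't match the expected format
        path = m.group('path')
        hits[path] += 1
        total_secs[path] += float(m.group('secs'))
    return {p: (n, total_secs[p] / n) for p, n in hits.items()}

sample = [
    '10.0.0.1 alice [10/Oct/2022:13:55:36] "GET /checkout HTTP/1.1" 200 2326 0.172',
    '10.0.0.2 bob [10/Oct/2022:13:55:37] "GET /checkout HTTP/1.1" 200 2326 0.250',
    '10.0.0.3 carol [10/Oct/2022:13:55:38] "GET /home HTTP/1.1" 200 512 0.040',
]
stats = summarise(sample)
```

The same counting-and-aggregating pattern scales up naturally: swap the in-memory `Counter` for a distributed group-by on a big data platform and the logic is unchanged.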
Taken together, all these sources of machine data across the application and IT infrastructure landscape add up to a lot of data. However, the emergence of big data platforms like Hadoop, together with advanced analytics, is making it possible to load all this data into one place and analyse it. It is not surprising, therefore, that people are catching on to what they can do with machine data. New markets are emerging for application performance monitoring and IT infrastructure performance monitoring. With CFOs putting CIOs under pressure to cut costs and optimise IT investment, is it any wonder that CIOs are interested? I would be! Ideally, what CIOs want is self-optimising infrastructure and self-optimising applications. Sounds impossible, right? Wrong! If you are interested in this, check out my new market brief, Smart Infrastructure and Smart Applications for the Smart Business, to see who the players are in the Application Performance Monitoring market and how you can start to optimise your applications and IT infrastructure.