What Does Embedded Expert Knowledge Mean and Offer?
All the different platforms that IntelliMagic supports for SAN and z/OS environments generate data about their configuration, capacity, activity and performance. Just collecting this raw data and presenting it in a database or set of graphs does not unlock the value that is implicitly present in the data.
The way that this raw data is turned into meaningful information is what makes IntelliMagic unique. We investigated the architecture and all the metrics in the raw data extensively and embedded our knowledge gained over the past 25+ years into the software.
Different From Statistics
This is fundamentally different from standard IT Operations Analytics (ITOA) solutions that rely on statistical anomaly detection. Detecting statistical outliers can be useful, but the weakness of the statistical approach is that it neither predicts upcoming issues nor reveals root causes. Because it relies on correlation rather than interpretation, it cannot distinguish cause from effect or explain why an anomaly occurs. Anomaly detection also tends to generate many false alerts that send you chasing issues that are not real.
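To illustrate the limitation, here is a minimal sketch of statistical outlier detection using z-scores. The data and threshold are purely illustrative; the point is that the statistics flag an unusual sample but say nothing about why it happened.

```python
import statistics

def zscore_outliers(samples, threshold=2.0):
    """Flag samples more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return [x for x in samples if abs(x - mean) > threshold * stdev]

# Hypothetical response times (ms): the spike is flagged as an anomaly,
# but the statistics alone cannot say whether it is cause, effect, or noise.
response_ms = [5, 6, 5, 7, 6, 5, 48, 6, 5, 6]
print(zscore_outliers(response_ms))  # the 48 ms sample is flagged
```

A knowledge-based approach would instead interpret that spike in the context of the workload and hardware that produced it.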
IntelliMagic software has a built-in fundamental understanding of how workloads, logical concepts and physical hardware interact. By embedding human expertise in the software, the full potential of the data is unlocked. The software detects risks before issues impact production. It also enables users to find true root causes quickly and reliably. Furthermore, it highlights where there is tangible potential for optimization, and ultimately it gives IT staff what they need to be most effective in delivering a reliable datacenter at an optimal cost level.
How Embedded Expert Knowledge Is Implemented in IntelliMagic Software
Our expert knowledge is embedded on two levels:
- Data to information phase – When processing the raw data that comes from the SAN or z/OS systems
- Information to intelligence phase – When evaluating the information
Data to Information
The following are some ways in which our knowledge is embedded into our software in the ‘data to information’ phase:
- which data sources deliver the necessary information,
- which metrics contain valuable information and which ones are superfluous or meaningless,
- what the exact meaning of a certain metric is for each different platform,
- which metrics should be normalized, and against which other metrics,
- which metrics may be combined to provide brand new information.
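As a simple illustration of normalizing and combining metrics in the 'data to information' phase, consider cumulative interval counters of the kind a storage system might report. The field names and values below are illustrative, not any vendor's actual format.

```python
# Hypothetical raw interval record with cumulative counters
# (field names and numbers are illustrative only).
raw = {
    "interval_secs": 300,          # length of the measurement interval
    "read_ios": 1_500_000,         # reads completed during the interval
    "read_service_us": 900_000_000 # total read service time, microseconds
}

# Normalizing one counter against another turns raw data into information:
read_rate = raw["read_ios"] / raw["interval_secs"]             # I/Os per second
avg_read_ms = raw["read_service_us"] / raw["read_ios"] / 1000  # ms per I/O

print(f"{read_rate:.0f} IO/s at {avg_read_ms:.2f} ms average")
```

Knowing which counter to divide by which, and what the units mean on each platform, is exactly the kind of knowledge this phase embeds.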
This might be mostly invisible, but this type of knowledge is a critically important prerequisite to using the more visible types of embedded knowledge that produce the dashboards, thresholds, ratings, drill downs, recommendations and explanations.
Information to Intelligence
In the ‘information to intelligence’ stage we further interpret the information with additional embedded expert knowledge to create true Availability Intelligence.
Here are some examples of embedded expert knowledge used to turn information into intelligence:
- utilization levels for internal components that are not directly measured, computed from knowledge of the architectural throughput limits,
- which of the vast number of metrics are the most important to put in a dashboard to monitor for hidden issues,
- which type of visualization is the most applicable for each metric and detail level,
- what performance you should expect for a particular combination of workload and configuration, resulting in dynamic thresholds that consider the hardware and workload interaction,
- what sort of thresholds are relevant for each metric, e.g. fixed thresholds or workload dependent,
- what default levels of the thresholds should be for different configurations,
- how thresholds should be configurable by the user,
- for which metrics it is relevant to set Service Level Objectives,
- what it means when a performance value is outside the safe range, displayed in explanation fields that point to potential root causes,
- the potential causes of the exceptions, allowing users to further drill down to find the root cause,
- which other metrics to look at when looking at a certain metric, as presented in multicharts,
- how to drill down to relevant connected information from each certain point onwards.
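The first two points above can be sketched as follows: deriving the utilization of a component from a known architectural limit, then rating it against thresholds set by expertise rather than statistics. The limit and threshold values here are invented for illustration, not real product specifications.

```python
# Illustrative architectural limit of a hypothetical front-end port (MB/s).
PORT_LIMIT_MBPS = 1600

def rate_utilization(measured_mbps, limit=PORT_LIMIT_MBPS,
                     warn=0.50, critical=0.80):
    """Return (utilization, rating) using embedded threshold knowledge."""
    util = measured_mbps / limit
    if util >= critical:
        return util, "red"     # risk of impact to production workloads
    if util >= warn:
        return util, "yellow"  # worth investigating before it escalates
    return util, "green"       # within the safe operating range

print(rate_utilization(1200))  # (0.75, 'yellow')
```

Because the limit and thresholds encode how the hardware actually behaves, a rating like this can flag a risk before response times degrade, rather than after an anomaly appears.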
Want to Learn More and See Examples?
If you want to know more details and see examples of how this results in superior availability and reduced cost levels, make an appointment for a free demonstration.