Enterprise-wide reporting across the IT stack and IT domains is a nightmare to develop and maintain.
In 2009, I developed a storage chargeback system for a customer using 17 different data sources. The data collection was mostly automated, but admittedly it wasn't very robust. The data came from native CLI commands, manual input, database queries, and automatically generated reports. The system was fairly modest but effective; however, after I turned it over, it took a part-time developer 20 to 40 hours per month to generate the reports and maintain it.
Even within the storage domain there wasn't one single interface for communicating with the devices. Other systems, such as change management and asset management databases, had standard SQL query interfaces, but some systems were inaccessible and required someone to update a flat file.

I was hopeful, and perhaps somewhat naive, in expecting that this type of challenge would be resolved over the last couple of years. As I speak with more and more customers, I find that it still exists and hasn't gotten any simpler. There are hundreds of disparate data sources in IT that cross infrastructure silos and business domains (chargeback, capacity, storage management, server management, etc.). Each of these domains has different objectives, different data sources, and different areas of technical expertise. What they have in common is overlap in the data they need to consume.
On the reporting side there are a number of solutions that may reduce the nightmare:
- Relational database querying tools. There are a number of tools that can cross multiple relational databases to perform common reporting. One of IntelliMagic’s partners, TeamQuest, has an excellent solution for this called Surveyor (check it out at https://www.teamquest.com/products-services/teamquest-performance-software/teamquest-surveyor/). This type of solution goes a long way towards easing Enterprise-wide IT reporting.
- Another possible solution is something like Splunk. This seems to be a pretty flexible solution that can consume data of mixed file types.
- Custom or off-the-shelf database extract, transform & load (ETL) solutions.
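To make the ETL approach concrete, here is a minimal sketch in Python using an in-memory SQLite database. All source names, tables, and figures are hypothetical; in practice each staging table would be fed from a separate database, CSV export, or CLI report.

```python
import sqlite3

# Hypothetical rows pulled from two separate "sources": an asset management
# database mapping arrays to business owners, and a storage capacity report.
asset_rows = [("array01", "Finance"), ("array02", "HR")]
capacity_rows = [("array01", 12000), ("array02", 8000)]  # GB used per array

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Load: land each source in its own staging table.
cur.execute("CREATE TABLE assets (array TEXT, owner TEXT)")
cur.execute("CREATE TABLE capacity (array TEXT, gb_used INTEGER)")
cur.executemany("INSERT INTO assets VALUES (?, ?)", asset_rows)
cur.executemany("INSERT INTO capacity VALUES (?, ?)", capacity_rows)

# Transform: join the staged sources into a chargeback view per owner.
cur.execute("""
    SELECT a.owner, SUM(c.gb_used)
    FROM assets a JOIN capacity c ON a.array = c.array
    GROUP BY a.owner ORDER BY a.owner
""")
chargeback = cur.fetchall()  # one (owner, total GB) row per business owner
```

The real work in any ETL solution is the extract step for each source; the transform-and-load step, as this sketch shows, is comparatively simple once the data is landed in one place.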
While these solutions all dramatically reduce manual reporting efforts, the data collection problem has largely not been addressed.
Why, after 25 years of widespread mid-range computing, do we not have a well-deployed standard interface for instrumenting all the layers of the IT stack: server, application, database, web application server, storage, and network? While I applaud the IT industry's technological innovations over the last 40 years, the industry has dropped the ball on standards. Perhaps industry should not be expected to focus on differentiation and standardization at the same time!
SNMP is probably the most widely implemented standard for managing IT infrastructure, but it is woefully inadequate for performance and configuration management. CIM Operations over HTTP (https://www.dmtf.org/sites/default/files/standards/documents/DSP0200_1.3.1.pdf) describes an abstracted, object-oriented API that seems like a more viable option. It is managed by the Distributed Management Task Force (DMTF). I spent some time trying to find support for configuration and performance statistics on various platforms, and the only thing that was clear is that it is supported on z/OS V1R12.
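To give a feel for what "CIM Operations over HTTP" actually carries on the wire, here is a sketch of a minimal CIM-XML request body. The element names follow the DSP0200 message format; the target class and namespace are illustrative, and a real request would be POSTed to a CIM server over HTTP(S).

```python
import xml.etree.ElementTree as ET

# A minimal CIM-XML EnumerateInstances request, per the DSP0200 message
# format. The namespace (root/cimv2) and class (CIM_ComputerSystem) are
# illustrative choices, not a specific vendor's configuration.
request = """<?xml version="1.0" encoding="utf-8"?>
<CIM CIMVERSION="2.0" DTDVERSION="2.0">
 <MESSAGE ID="1" PROTOCOLVERSION="1.0">
  <SIMPLEREQ>
   <IMETHODCALL NAME="EnumerateInstances">
    <LOCALNAMESPACEPATH>
     <NAMESPACE NAME="root"/><NAMESPACE NAME="cimv2"/>
    </LOCALNAMESPACEPATH>
    <IPARAMVALUE NAME="ClassName">
     <CLASSNAME NAME="CIM_ComputerSystem"/>
    </IPARAMVALUE>
   </IMETHODCALL>
  </SIMPLEREQ>
 </MESSAGE>
</CIM>"""

# Parse it back to confirm the payload is well-formed XML.
root = ET.fromstring(request)
method = root.find(".//IMETHODCALL")
```

The appeal over SNMP is visible even in this fragment: the operation, namespace, and class are all expressed against one object model rather than a flat tree of numeric OIDs.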
So is there hope? SMI-S (https://snia.org/tech_activities/standards/curr_standards/smi) is an excellent industry standard API for managing storage systems, fabric and HBA components. It is based upon CIM Operations over HTTP and includes specifications for managing all aspects of the storage, fabric and HBA operations including the gathering of performance, capacity and chargeback information. In comparison to the implementations available for other standards in the IT stack: host, database, and applications, SMI-S seems more mature. Applications have ARM (https://collaboration.opengroup.org/tech/management/arm/), but databases and hosts have no standard that I am aware of.
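One practical detail worth knowing about SMI-S performance data: the block storage statistics classes (such as CIM_BlockStorageStatisticalData) expose cumulative counters, so meaningful metrics come from the delta between two polls. A hedged sketch, using hypothetical sample values and assuming the counter property names from the block storage statistics class:

```python
# SMI-S block storage statistics are cumulative counters (e.g. TotalIOs,
# IOTimeCounter on CIM_BlockStorageStatisticalData), so averages and rates
# must be derived from the difference between two polls. The units of
# IOTimeCounter are assumed here; check the provider's MOF for specifics.

def avg_response_time(sample1, sample2):
    """Average time per I/O over the interval between two counter samples."""
    delta_ios = sample2["TotalIOs"] - sample1["TotalIOs"]
    delta_time = sample2["IOTimeCounter"] - sample1["IOTimeCounter"]
    if delta_ios == 0:
        return 0.0
    return delta_time / delta_ios

# Two hypothetical polls of the same volume, some minutes apart.
t0 = {"TotalIOs": 1_000_000, "IOTimeCounter": 5_000_000}
t1 = {"TotalIOs": 1_060_000, "IOTimeCounter": 5_240_000}

rt = avg_response_time(t0, t1)  # average time units per I/O this interval
```

In a real deployment the samples would come from an SMI-S provider via a WBEM client library rather than hand-built dictionaries, but the counter arithmetic is the same.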
Microsoft (Windows) and Redhat (Linux) are now also investing in SMI-S, primarily from a host based SAN management perspective. Perhaps there will be some expansion of the standard in the space of server logical and physical resources. This would be great for customers, particularly the IT measurement and reporting communities that lack a common way to consume measurement and resource data.
Standards move forward when enough customers push on the industry. The problem is too big for a single customer, vendor or standard body, but we can’t continue to manage IT assets like we have been in the past. There are too many moving pieces and too few resources. Standards enable more efficient IT management. They reduce tools and labor costs. They allow everyone to focus on their expertise as opposed to getting caught up maintaining unsustainable solutions. End-users can start by asking their vendors to add or improve standard support, such as SMI-S, in the storage area networking space and CIM agents for their OS and application platforms.