This is the last blog post in a series of four, where we share our experience with the instrumentation that is available for the IBM DS8000, EMC VMAX and HDS VSP or HP XP P9500 storage arrays through RMF and SMF. This post is about the Hitachi high-end storage array that is sold by HDS as the VSP and by HP as the XP P9500.
RMF has been developed over the years by IBM, based on IBM storage announcements. Even for the IBM DS8000, not nearly all functions are covered; see the blog post “What IBM DS8000 Should Be Reporting in RMF/SMF – But Isn’t”. For the other vendors it is harder still: they must make do with what IBM provides in RMF, or create their own SMF records.
Hitachi has supported the RMF 74.5 cache counters for a long time, and those counters are fully applicable to the Hitachi arrays. For other RMF record types though, it is not always a perfect match. The Hitachi back-end uses RAID groups that are very similar to IBM’s. This allowed Hitachi to use the RMF 74.5 RAID Rank and 74.8 Link records that were designed for IBM ESS. But for Hitachi arrays with concatenated RAID groups not all information was properly captured. To interpret data from those arrays, additional external information from configuration files was needed.
With their new Hitachi Dynamic Provisioning (HDP) architecture, the foundation for both Thin Provisioning and automated tiering, Hitachi updated their RMF 74.5 and 74.8 support such that each HDP pool is reflected in the RMF records as if it were an IBM Extent Pool. This allows you to track the back-end activity on each of the physical drive tiers, just like for IBM.
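To make the record types concrete, here is a minimal sketch of picking out RMF 74.5 (cache counter) records from SMF data. The field offsets follow the standard SMF record header layout (after the 4-byte RDW): flag, record type, time, date, system ID, subsystem ID, subtype. The function names are our own; the internal layout of the 74.5 data sections is not shown.

```python
import struct

def parse_smf_header(record: bytes) -> dict:
    """Parse the standard SMF header fields (RDW already stripped)."""
    rtype = record[1]                              # record type (74 = RMF device/cache activity)
    time_hund, = struct.unpack(">I", record[2:6])  # hundredths of a second since midnight
    sysid = record[10:14].decode("cp500").strip()  # EBCDIC SMF system id
    subtype, = struct.unpack(">H", record[18:20])  # subtype (5 = cache counters)
    return {"type": rtype, "subtype": subtype, "sysid": sysid, "time": time_hund}

def is_cache_counter_record(record: bytes) -> bool:
    """True for RMF type 74 subtype 5 (cache counter) records."""
    hdr = parse_smf_header(record)
    return hdr["type"] == 74 and hdr["subtype"] == 5
```

With a filter like this, the 74.5 records from a Hitachi array can be selected and processed exactly like those from an IBM box, which is the point of Hitachi mapping each HDP pool onto the Extent Pool structures of the records.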
This does not provide information about the dynamic tiering process itself, however. Just like for the other vendors, there is no information per logical volume on what portion of its data is stored on each drive tier. Nor are there any metrics available about the migration activity between the tiers.
Overall, we would like to see the following information in the RMF/SMF recording:
- Configuration data about replication. Right now, you need to issue console or Business Continuity Manager commands to determine replication status. Since proper and complete replication is essential for any DR usage, the replication status should be recorded every RMF interval instead.
- Performance information on Universal Replicator, Hitachi’s implementation of asynchronous mirroring. Important metrics include the delay time for the asynchronous replication, the amount of write data yet to be copied, and the activity on the journal disks.
- ShadowImage, FlashCopy and Business Copy activity metrics. These functions provide logical copies that can involve significant back-end activity which is currently not recorded separately. This activity can easily cause hard-to-identify performance issues, hence it should be reflected in the measurement data.
- HDP Tiering Policy definitions, tier usage and background migration activity. From z/OS, you would want visibility into the migration activity, the policies defined for each pool, and the actual drive tiers that each volume is using.
Unless IBM is going to provide an RMF framework for these functions, the best approach for Hitachi is to create custom SMF records from the mainframe component that Hitachi already uses to control the mainframe-specific functionality.
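SMF record types 128-255 are reserved for installation and vendor use, so a custom record could carry exactly the replication metrics listed above. The sketch below lays out such a hypothetical record; the record type, field names, and payload are all assumptions for illustration, and a real mainframe component would write the record with the SMFWTM macro rather than build bytes like this.

```python
import struct

USER_SMF_TYPE = 230  # assumption: any type in the user range 128-255 would do

def build_replication_status_record(sysid: str, pair_state: int, ur_delay_ms: int) -> bytes:
    """Build a minimal SMF-style record (RDW and real timestamps omitted).

    pair_state and ur_delay_ms are hypothetical payload fields: a replication
    pair status code and the Universal Replicator delay in milliseconds.
    """
    header = bytes([0x00, USER_SMF_TYPE])         # flag byte + record type
    header += struct.pack(">I", 0)                # time placeholder (hundredths since midnight)
    header += b"\x00" * 4                         # date placeholder (packed 0cyydddF)
    header += sysid.ljust(4)[:4].encode("cp500")  # EBCDIC system id
    payload = struct.pack(">BI", pair_state, ur_delay_ms)
    return header + payload
```

Because the record reuses the standard SMF header, existing SMF dump and selection tooling would handle it like any other record; only the payload parsing would be vendor-specific.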
It is good to see that Hitachi is working to fit its data into the framework defined by RMF for the IBM DS8000. Yet we would like to see more information from the HDS VSP and HP XP P9500 reflected in the RMF or SMF records.
So when considering your next HDS VSP or HP XP P9500 purchase, also discuss the need to manage it with the tools that you use on the mainframe for this purpose: RMF and SMF. If your commitment to the vendor is significant, they may be responsive.