Significant obstacles to implementing zHyperLink
Sure, the published response time expectations for zHyperLink are amazing: 20 microseconds or less. For those of us who have been in the mainframe industry for years, that is a remarkable number. Without a doubt, the far-reaching changes across the hardware and software stack required to pull this off represent a phenomenal technological accomplishment.
But achieving this blazing response time requires significant trade-offs, specifically in data center configuration and replication strategy. In the data center, zHyperLink requires direct point-to-point connectivity between processors and storage frames. That rules out the use of FICON directors, which have greatly simplified data center connectivity since the old “bus and tag cable” days. In addition, zHyperLink limits the maximum distance between processor and storage frame to 150 meters.
When it comes to replication, many of today’s strategies rely heavily on synchronous replication between storage frames to ensure continuous availability despite storage frame failures and data center environmental incidents. But synchronous replication is not supported with zHyperLink since the time required for the frame-to-frame communication protocol to replicate writes would exceed its service time “budget”.
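A back-of-envelope calculation illustrates why synchronous replication cannot fit inside that budget. The sketch below is illustrative and not from the article; it assumes only that light propagates through optical fiber at roughly 200,000 km/s, or about 5 microseconds per kilometer one way, and it ignores all protocol and processing overhead (which would only make things worse).

```python
# Illustrative sketch: fiber propagation delay alone vs. the ~20 us
# zHyperLink service-time budget. Assumes ~5 us/km one-way in fiber;
# protocol and processing overhead are excluded.

US_PER_KM_ONE_WAY = 5.0  # approximate one-way propagation delay in fiber

def round_trip_us(distance_km: float) -> float:
    """Round-trip fiber propagation delay in microseconds."""
    return 2 * distance_km * US_PER_KM_ONE_WAY

# Within the 150 m zHyperLink limit, propagation is negligible:
print(round_trip_us(0.150))  # 1.5 us round trip

# A modest 10 km synchronous-replication distance already costs 100 us
# in propagation alone -- five times the entire 20 us budget:
print(round_trip_us(10))     # 100.0 us
```

Even before any frame-to-frame protocol handshaking, the physics of distance makes synchronous replication incompatible with the zHyperLink budget at typical replication distances.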
Impact of obstacles on zHyperLink adoption: less than anticipated
With these significant data center configuration and replication strategy obstacles, many (including this writer) anticipated a very slow adoption rate. After all, how many sites are experiencing business workload constraints arising from I/O latency to the extent that it would warrant the “pain” associated with overcoming the obstacles described above? To the extent that the attendees of our recent IntelliMagic zHyperLink webinar are representative of the industry, zHyperLink adoption may be more rapid than anticipated.
Question 1: Impact of I/O latency
We asked our webinar audience two questions, to which more than 70 attendees responded. The first pertained to the degree to which I/O latency was hindering business volume growth. After all, if you don’t have the disease and aren’t likely to get it, there is no reason to endure a painful procedure from your doctor.
As you can see from this chart, well over half the respondents in this webinar indicated that I/O latency is now or will soon be hindering volume growth of business-critical workloads in their environments. This differs from the common perception that I/O issues are a problem of the past. Despite all the improvements that have been made in I/O performance, including the recent proliferation of flash storage, more than half of these respondents remain concerned about I/O performance.
Since many of these sites are facing the type of I/O constraints that zHyperLink is designed to relieve, perhaps they would be willing to expend the effort required to overcome the previously described data center and replication hurdles.
Question 2: Willingness to “push through the pain” to “achieve the gain”
That was exactly the scope of the second question. How significant did our audience believe those connectivity and replication obstacles would be to implementing zHyperLink at their sites?
The response to this question was even more favorable to zHyperLink adoption. More than 4 of 5 respondents indicated that the connectivity and replication obstacles could or would be overcome.
Looking ahead to zHyperLink adoption
Today, only Db2 reads are eligible, and those will complete as successful zHyperLink “sync I/Os” only if they are cache hits. Software and hardware enhancements are expected to significantly expand the subset of I/Os that can benefit from zHyperLink in the near future, but today’s connectivity and replication limitations are likely to remain in place. The fact that this audience was motivated to overcome those restrictions (driven by the need to relieve I/O latency constraints), and was optimistic that overcoming them was feasible, bodes well for near-term industry adoption of zHyperLink.
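Because only cache hits complete as sync I/Os, the practical benefit of zHyperLink scales with the cache hit ratio. The sketch below is a hypothetical model, not from the article: the 20 us figure is the published sync I/O expectation, while the fallback latency for a cache miss is an assumed illustrative value.

```python
# Hypothetical latency model (illustrative numbers): reads that hit
# cache complete as zHyperLink sync I/Os; misses fall back to the
# traditional asynchronous I/O path.

SYNC_HIT_US = 20.0          # published zHyperLink sync I/O expectation
ASYNC_FALLBACK_US = 200.0   # assumed traditional-path latency on a miss

def expected_read_latency_us(cache_hit_ratio: float) -> float:
    """Average read latency, weighted by the cache hit ratio."""
    return (cache_hit_ratio * SYNC_HIT_US
            + (1 - cache_hit_ratio) * ASYNC_FALLBACK_US)

# The benefit grows directly with the hit ratio:
for hit_ratio in (0.5, 0.9, 0.99):
    print(f"{hit_ratio:.0%} hits -> "
          f"{expected_read_latency_us(hit_ratio):.1f} us average")
```

Under these assumptions, a workload with a 99% cache hit ratio averages close to the sync I/O figure, while a 50% hit ratio gives up most of the gain, which is why expanding the set of eligible I/Os matters so much for future adoption.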
If you haven’t already, check out our webinar on the topic of zHyperLink, or read our white paper, zHyperLink: The Holy Grail of Mainframe IO?