Okay, so it doesn’t quite have the same ring to it as “lions, tigers, and bears,” but one can try. As we near the close of 2018, I wanted to reflect on the world of mainframe performance monitoring and RMF/SMF analysis and pay tribute to the topics that resonated most within the industry.
AIOps (or Artificial Intelligence for IT Operations, for those of you who happened to miss a year’s worth of marketing) certainly went mainstream!
The term refers to utilizing advanced analytics technologies to directly and indirectly enhance IT operations functions with proactive insight. This is a far cry from approaches designed 30 years ago that require a significant amount of human expert time and knowledge to create and understand reports critical for performance.
We rarely have to explain to prospects anymore why RMF and SMF reporting products designed 30 years ago should be modernized with machine-driven interpretation of what the data means. Kudos to Gartner for coining the term, and kudos to the market for catching on.
MLC optimization (reduction, tuning, etc.) might still be in the lead, however, in terms of market saturation. Companies are now fully aware of how much MLC costs affect their bottom line and are no longer convinced there is nothing they can do about it.
From improving the four-hour rolling average (R4HA) to optimizing processor cache, there is now a slew of resources, solutions, and best practices available to help sites lower their MLC costs. Hopefully most of those practices don’t have a negative impact on performance (and they don’t need to).
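Because sub-capacity MLC charges are driven by the peak of the four-hour rolling average rather than by instantaneous MSU consumption, smoothing matters. The sketch below is an illustrative simplification (not IBM's actual SCRT calculation) showing how a short spike moves the R4HA only gradually; the 5-minute sample interval and the example workload numbers are assumptions for the demonstration.

```python
# Illustrative sketch of a rolling four-hour average (R4HA) of MSU
# consumption, computed over 5-minute samples (48 samples per window).
# This is a simplification for explanation, not the SCRT algorithm.
from collections import deque

def rolling_4h_average(msu_samples, samples_per_4h=48):
    """Yield the rolling average after each MSU sample arrives."""
    window = deque(maxlen=samples_per_4h)  # keeps only the last 4 hours
    for sample in msu_samples:
        window.append(sample)
        yield sum(window) / len(window)

# A one-hour burst at 4x the baseline load...
samples = [100] * 48 + [400] * 12
r4ha = list(rolling_4h_average(samples))

# ...raises the R4HA peak far less than the burst itself, because
# three hours of baseline samples remain in the window.
peak = max(r4ha)  # 175.0, well below the 400 MSU burst
```

This is why tuning techniques that shift or smooth work across the peak window can lower the billed peak even when total consumption is unchanged.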
If you happen to be a subscriber to the WatsonWalker Tuning Letter (and if not, why not?), be sure to check out the latest issue (2018 No. 3) for an interesting article on how one company achieved a significant drop in MSUs, with numerous references to our very own MLC optimization expert, Todd Havekost.
Performance Monitoring for z/OS Mainframe
Performance monitoring, performance management, performance tuning. What’s in a name? Well in this case, a lot.
In essence, a real-time z/OS performance monitor, which is what many sites want, is already too late. Even if you are right in front of your screen at the exact moment the monitor alerts you to a service disruption, there is a good chance an end user is already experiencing application availability issues. (And hopefully this isn’t happening on Black Friday, Cyber Monday, or whatever your industry’s peak day is.)
Going back to AIOps, modern solutions take advantage of the machine’s ability to automatically interpret the data to provide proactive (read: faster than real-time) insights into your application performance.
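One simple way to picture that kind of proactive insight is statistical baselining: compare the latest interval of a key metric against its recent history and flag unusual drift before it grows into a user-visible disruption. The sketch below is a deliberately minimal illustration of the idea; the z-score threshold, the metric choice, and the sample values are all assumptions for the example, not how any particular product implements it.

```python
# Minimal sketch of baseline-based anomaly detection: flag a metric
# interval (e.g., average transaction response time in ms) that drifts
# more than a chosen number of standard deviations from recent history.
import statistics

def is_anomalous(baseline, latest, z_threshold=3.0):
    """Return True when `latest` deviates from the baseline mean by
    more than z_threshold population standard deviations."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    if stdev == 0:
        return latest != mean  # flat baseline: any change is notable
    return abs(latest - mean) / stdev > z_threshold

baseline = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]  # recent intervals, ms
normal_drift = is_anomalous(baseline, 12.3)   # within normal variation
early_warning = is_anomalous(baseline, 19.5)  # unusual, worth a look
```

Real AIOps solutions are far more sophisticated (seasonality, workload context, multivariate correlation), but the principle is the same: let the machine watch the baselines so the analyst hears about drift before users do.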
If you could choose between knowing the exact second something went wrong, or knowing about the issue a week in advance, which would you choose?
What is a “greatest hits” list without some honorable mentions? A few of these narrowly missed the top 3, and depending on which circles of mainframe performance you operate in, some of these topics may have been at the very top of your list.
Db2 and CICS Performance Analytics
Performance monitoring for CICS and Db2 may have long been thought a done deal, but it now appears otherwise. This year’s webinar on CICS and Db2 Performance Analytics blew previous IBM Systems Magazine registration records out of the water.
This suggests there is still hefty demand for better and faster access to the important information in CICS and Db2 metrics. The need is not for more data, but for smarter data.
When IBM announced zHyperLink as a new mainframe technology designed to deliver extremely low I/O latency, we initially assumed that the trade-offs involving data center configurations would be a hurdle to implementation. Even with the expected response times coming in at 20 microseconds or less, zHyperLink requires direct point-to-point connectivity between processors and storage frames, which rules out the use of FICON directors.
However, the results from our webinar on zHyperLink benefits and limitations indicated that our assumptions were likely wrong. Our audience reported they were motivated to overcome the connectivity and replication implications of zHyperLink in order to resolve a larger-than-expected share of I/O latency issues.
This certainly bodes well for the near-term industry adoption of zHyperLink. View the results of our zHyperLink survey here.
Outsourcing Mainframe Performance Operations
For better or worse, outsourcing of mainframe performance operations continues to be a reality for many sites. Management leans towards the benefits; performance analysts warn of the risks.
From conversations our own consultants have had with other industry experts, mainframe outsourcing covers the entire spectrum: some sites are beginning to consider outsourcing, others are moving forward with their plans, while still others are either pushing their outsourcing plans further into the future or reversing course and in-sourcing mainframe operations again.
Clearly this is a touchy subject that isn’t likely to go away anytime soon. IntelliMagic partners at WatsonWalker and Edge Solutions joined forces to provide some key performance and capacity insights for outsourced sites, sites likely to be outsourced, and even for outsourcers.
2019: The Golden Year of Mainframe Performance?
For something that was ‘supposed to have died off years ago’ and has had its demise predicted many times, the mainframe looks very much here to stay. There’s enough innovation and modernization happening not only to keep performance and capacity analysts employed, but indeed to help them thrive.
If the goal of the performance and capacity team is to deliver reliable service and availability, we may very well be entering the golden era. How will we know when we’re there? We’ll know when the phone stops ringing about fires to put out; instead, you spot the upcoming problem well in advance of the disruption, calmly fix it, and carry on with other important tasks, all while enjoying your morning coffee. That will be a good sign.
z/OS Performance and Capacity Availability Intelligence
Protect the availability of your end-to-end z/OS Infrastructure with IntelliMagic’s AI-driven IT Operations Analytics (ITOA) solution for performance and capacity management: IntelliMagic Vision for z/OS.