Todd Havekost - 20 June 2016

With the recent release of “Alice Through the Looking Glass” (my wife is a huge Johnny Depp fan), it seems only appropriate to write on a subject epitomized by Alice’s famous words:

“What if I should fall right through the center of the earth … oh, and come out the other side, where people walk upside down?”  (Lewis Carroll, Alice in Wonderland)

Mainframe Peak Utilization

Along with the vast majority of the mainframe community, I had long embraced the perspective that running mainframes at high levels of utilization was essential to operating in the most cost-effective manner.

Based on carefully constructed capacity forecasts, our established process involved implementing just-in-time upgrades designed to keep peak utilizations slightly below 90%.

MSU Consumption Spikes

It turns out we’ve all been wrong. After implementing z13 processor upgrades and observing MSU consumption spike up sharply, I learned first-hand the dramatic impact processor cache utilization has on delivered capacity for z13 models.

When processor cache is optimized, our high-powered mainframe processors remain productive actively executing instructions, rather than unproductively burning cycles waiting for data and instructions to be staged into the Level 1 cache.
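To make the idea of "unproductively burning cycles" concrete, the sketch below splits cycles-per-instruction (CPI) into productive execution versus cache-miss waits. This is a simplified illustration of the kind of analysis enabled by SMF 113 hardware counters; the function name and the counter values are illustrative assumptions, not actual SMF field names or measured data.

```python
# Sketch: splitting cycles-per-instruction (CPI) into productive cycles
# vs. cycles stalled waiting for data/instructions to reach Level 1 cache.
# Counter values are illustrative, not actual SMF 113 fields.

def cache_miss_cpi(cycles, instructions, l1_miss_penalty_cycles):
    """Decompose total CPI into productive CPI and cache-wait ('finite') CPI."""
    total_cpi = cycles / instructions
    finite_cpi = l1_miss_penalty_cycles / instructions  # cycles stalled on L1 misses
    productive_cpi = total_cpi - finite_cpi
    return total_cpi, productive_cpi, finite_cpi

# Illustrative interval: 10B cycles, 2B instructions completed,
# 4B of those cycles lost waiting on Level 1 cache misses.
total, productive, waiting = cache_miss_cpi(10e9, 2e9, 4e9)
print(f"CPI={total:.1f}  productive={productive:.1f}  waiting={waiting:.1f}")
# In this made-up interval, 40% of all cycles are cache waits, not execution.
```

The point of the decomposition: reducing the cache-wait component directly lowers CPI, so the same workload finishes in fewer cycles, which is exactly why delivered capacity improves when cache behaves well.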

Increasing the amount of work executing on processors effectively dedicated to a single LPAR ("Vertical High CPs") reduces how often multiple LPARs with disparate workloads compete for the same processor cache, contention that is particularly detrimental to processor efficiency and throughput.

MLC Savings of Millions of Dollars

In my specific experience, operating at utilizations in the 30s instead of the upper 80s reduced MSU consumption by 30% (13,000 MIPS!), resulting in recurring Monthly License Charge (MLC) software savings of millions of dollars annually.
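The arithmetic behind those figures can be sketched as follows. The MLC rate used here is a hypothetical $/MIPS/year assumption for illustration only; the 13,000 MIPS and 30% figures come from the experience described above.

```python
# Back-of-envelope arithmetic behind the savings figures above.
# The $/MIPS/year MLC rate is a hypothetical assumption, not from this article.

mips_saved = 13_000                        # MIPS reduction cited above
reduction = 0.30                           # 30% of pre-optimization consumption
implied_baseline = mips_saved / reduction  # consumption before optimization
mlc_rate_per_mips_year = 250               # hypothetical annual MLC $ per MIPS
annual_savings = mips_saved * mlc_rate_per_mips_year
print(f"baseline = {implied_baseline:,.0f} MIPS, "
      f"annual savings = ${annual_savings:,.0f}")
# -> baseline = 43,333 MIPS, annual savings = $3,250,000
```

Even at modest assumed MLC rates, a one-time hardware change that permanently removes five figures of MIPS from the monthly peak compounds into seven-figure annual savings.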

You can download the presentation notes with more details on how this was accomplished here. The economics of one-time hardware acquisitions creating "forevermore" annual software savings of this magnitude are readily apparent.

This experience ultimately turned all my concepts of mainframe capacity planning upside down, because processor cache operates far more effectively at lower utilization levels. Like Alice Through the Looking Glass, I’m now walking upside down with new insights!

Figure 1: MIPS vs. Transactions report from the presentation

Read my follow-up to this blog to learn how other mainframe sites have identified 7-figure annual MLC reduction savings.


