Lee LaFrese -

The last decade has seen a storage virtualization revolution just as profound as the one that has transformed the server world.  In both cases, virtualization enables a logical view and control of physical infrastructure, with the goal of optimization, simplification, and greater utilization of physical resources.  These are lofty goals, but there is a fundamental difference between server and storage virtualization.  When a virtual server needs compute resources for an application, there is little limitation on which specific resources may be used other than maximum usage caps.  With storage resources, once data is placed on a particular storage medium, an application is tied to the portion of the infrastructure that contains that medium, and usage caps don’t apply.  Thus, I believe performance management of virtualized storage is intrinsically more difficult and important than performance management of virtualized servers.  In fact, George Trujillo states in a recent entry in his A DBA’s Journey into the Cloud blog that “statistics show over and over that 80% of issues in a virtual infrastructure are due to the design, configuration and management of storage”.

OK, so maybe virtual storage is harder to manage.  If you are a storage performance analyst, you can pat yourself on the back and look down your nose at the server analysts.  However, that does not help you solve your virtual problems in the real world.  The bottom line is that you need to make sure applications are meeting their SLAs and that your storage delivers high-quality (fast) I/Os.  But virtualization can get in the way.  Whether you are talking about traditional storage arrays like EMC VMAX, IBM DS8000, or HDS VSP, or virtualization appliances like IBM SVC, HDS UDS, or EMC VPLEX, there is indeed virtualization going on.  The storage arrays take physical drives and create RAID arrays from which logical volumes are carved.  Virtualization appliances add another level of indirection by taking those already virtualized volumes and pooling them to create an even “more virtualized” set of volumes.  I liken this to the nested dream states in the movie “Inception,” where people have dreams within dreams within dreams.  And performance issues can quickly turn the dream into a nightmare!  But the value proposition of virtualization is so compelling that a few difficulties managing performance are no reason to avoid it.  Instead, we need to confront reality head-on.
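To make the layers of indirection concrete, here is a minimal sketch in Python of how a virtual volume resolves down to physical drives.  All the class and identifier names are hypothetical, invented purely for illustration; real products model these layers with far more detail (extents, striping, tiering), but the "dreams within dreams" structure is the same.

```python
# Toy model of nested storage virtualization layers (hypothetical names).
# A virtual volume on an appliance pools logical volumes, each carved from
# a RAID array built on physical drives -- dreams within dreams.

from dataclasses import dataclass
from typing import List

@dataclass
class PhysicalDrive:
    drive_id: str

@dataclass
class RaidArray:                  # built by the storage array from drives
    array_id: str
    drives: List[PhysicalDrive]

@dataclass
class LogicalVolume:              # carved from a RAID array
    volume_id: str
    raid_array: RaidArray

@dataclass
class VirtualVolume:              # pooled by the virtualization appliance
    volume_id: str
    backing_volumes: List[LogicalVolume]

    def physical_drives(self) -> List[PhysicalDrive]:
        """Resolve every layer of indirection down to the physical media."""
        return [d for lv in self.backing_volumes for d in lv.raid_array.drives]

# Usage: one virtual volume backed by logical volumes on two RAID arrays
drives_a = [PhysicalDrive(f"A{i}") for i in range(4)]
drives_b = [PhysicalDrive(f"B{i}") for i in range(4)]
vv = VirtualVolume("vdisk01", [
    LogicalVolume("lv01", RaidArray("raid_a", drives_a)),
    LogicalVolume("lv02", RaidArray("raid_b", drives_b)),
])
print(len(vv.physical_drives()))  # 8 drives behind a single virtual volume
```

The point of the sketch: a performance problem seen at `vdisk01` could originate in any of the eight drives, either RAID array, or the appliance layer itself, which is why volume-level statistics alone are not enough.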

In a virtual storage environment, the performance objective is high-quality I/O – this means that the I/O is fast enough to meet SLAs.  Although this is usually measured at the virtualized volume level, the volume-level statistics alone are insufficient for monitoring performance of the storage.  This is sort of like monitoring your health with a thermometer.  Sure, if you have a fever you might be sick, but you need to look further to determine what is wrong and how to fix it.  And you could be quite sick and not have any hint of a fever.  To manage virtualized storage performance effectively, you need to be able to see the performance and utilization of the underlying physical resources that provision the virtualized volumes.  Data on back-end ports, drives, adapters, fabric connectivity, auto-tiering, replication, and I/O priority are all very relevant here.  Without this level of insight into the infrastructure, you can’t hope to truly understand bottlenecks and how to fix them.

In a prior life, I spent a number of years as a manufacturing engineer.  My job was to deliver quality products at a high yield (think fast I/Os).  We could measure the end product to see if it met the specified tolerances (are the I/Os fast enough?), but that was completely insufficient to manage the yield.  For that we had “process control”: we had process specifications for each step, and if all the processes were “in spec,” the product would generally be good.  Likewise, if we monitor the underlying physical infrastructure and take action to fix things that are “out of spec,” we can go a long way toward ensuring high-quality I/O.  For example, if we know that none of the drives, adapters, ports, or fabric links are overutilized, we can be fairly confident that we won’t run into performance issues.  With the right tools, you can do this and keep the virtual world from interrupting your real sleep!  IntelliMagic Vision is one example of a tool that can help you sleep better.  For more information, go to www.intellimagic.com.
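The process-control idea can be sketched as a simple threshold check over back-end utilization samples.  This is a toy illustration under stated assumptions: the metric names, component classes, and "spec limits" below are all hypothetical and would need to be tuned per environment; it is not a description of how IntelliMagic Vision or any particular product actually models thresholds.

```python
# Sketch of "process control" for storage: flag any back-end component
# whose utilization is out of spec.  Component names, classes, and
# thresholds are illustrative assumptions only.

# Utilization samples (fraction of time busy), keyed as "class:component"
utilization = {
    "drive:raid_a": 0.92,
    "drive:raid_b": 0.41,
    "adapter:da0":  0.55,
    "port:fc1":     0.78,
    "fabric:isl3":  0.30,
}

# Hypothetical "spec limits" per component class
spec_limits = {"drive": 0.70, "adapter": 0.60, "port": 0.80, "fabric": 0.50}

def out_of_spec(samples, limits):
    """Return (component, utilization, limit) for each out-of-spec component."""
    alerts = []
    for name, util in samples.items():
        component_class = name.split(":")[0]
        if util > limits[component_class]:
            alerts.append((name, util, limits[component_class]))
    return alerts

for name, util, limit in out_of_spec(utilization, spec_limits):
    print(f"{name}: {util:.0%} busy exceeds spec limit {limit:.0%}")
```

If every check passes, the "process" is in spec and the I/O coming out the other end will generally be good; when something trips, you know which layer of the virtualization stack to drill into before the fever shows up at the volume level.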

This article's author

Lee LaFrese
Senior Storage Performance Consultant

