The last decade has seen a storage virtualization revolution just as profound as what has happened in the server world. In both cases, virtualization enables a logical view and control of the physical infrastructure, with the goals of optimization, simplification, and greater utilization of physical resources. These are lofty goals, but there is a fundamental difference between server and storage virtualization. When a virtual server needs compute resources for an application, there is little limitation on which specific resources may be used other than maximum usage caps. With storage resources, once data is placed on a particular storage medium, an application is tied to the portion of the infrastructure that contains that medium, and usage caps don’t apply. Thus, I believe performance management of virtualized storage is intrinsically more difficult and more important than managing performance of virtualized servers. In fact, George Trujillo states in a recent entry in his A DBA’s Journey into the Cloud blog that “statistics show over and over that 80% of issues in a virtual infrastructure are due to the design, configuration and management of storage”.
OK, so maybe virtual storage is harder to manage. If you are a storage performance analyst, you can pat yourself on the back and look down your nose at the server analysts. However, that does not help you solve your virtual problems in the real world. The bottom line is that you need to make sure applications are meeting their SLAs and that your storage delivers high quality (fast) I/Os. But virtualization can get in the way. Whether you are talking about traditional storage arrays like EMC VMAX, IBM DS8000 or HDS VSP, or virtualization appliances like IBM SVC, HDS UDS or EMC VPLEX, there is indeed virtualization going on. The storage arrays take physical drives and create RAID arrays, from which logical volumes are created. Virtualization appliances add another level of indirection by taking those already virtualized volumes and pooling them to create an even “more virtualized” set of volumes. I liken this to the different dream state levels in the movie “Inception”, where people have dreams within dreams within dreams. And performance issues can quickly turn the dream into a nightmare! But the value proposition of virtualization is so compelling that a few difficulties managing performance are no reason to avoid it. Instead, we need to confront reality head-on.
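To make those levels of indirection concrete, here is a minimal sketch (all class and volume names are hypothetical, not any vendor's actual object model) of how a single host-visible virtual volume can fan out across multiple RAID arrays and many physical drives, which is exactly why a single hot spindle can hurt many "unrelated" volumes:

```python
from dataclasses import dataclass

@dataclass
class RaidArray:
    name: str
    drives: list  # physical drives backing this RAID array

@dataclass
class LogicalVolume:
    name: str
    raid_array: RaidArray  # carved from one RAID array inside the storage array

@dataclass
class VirtualVolume:
    name: str
    pool: list  # logical volumes pooled together by the virtualization appliance

    def physical_drives(self):
        """Walk every layer of indirection down to the spindles that serve I/O."""
        drives = []
        for lv in self.pool:
            drives.extend(lv.raid_array.drives)
        return drives

# One "more virtualized" volume, two levels removed from the hardware:
array_a = RaidArray("RAID5-A", ["d1", "d2", "d3", "d4"])
array_b = RaidArray("RAID5-B", ["d5", "d6", "d7", "d8"])
vvol = VirtualVolume("VV01", [LogicalVolume("LV1", array_a),
                              LogicalVolume("LV2", array_b)])
print(vvol.physical_drives())  # eight drives behind one host-visible volume
```

The point of the sketch: performance problems on any one of those eight drives surface at the virtual volume, but the virtual volume's own statistics won't tell you which layer is to blame.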
In a virtual storage environment, the performance objective is high quality I/O – meaning the I/O is fast enough to meet SLAs. Although this is usually measured at the virtualized volume level, volume-level statistics alone are insufficient for monitoring the performance of the storage. This is sort of like monitoring your health with a thermometer. Sure, if you have a fever you might be sick, but you need to look further to determine what is wrong and how to fix it. And you could be quite sick without any hint of a fever. To manage virtualized storage performance effectively, you need to be able to see the performance and utilization of the underlying physical resources that provision the virtualized volumes. Data on back-end ports, drives, adapters, fabric connectivity, auto-tiering, replication and I/O priority are all very relevant here. Without this level of insight into the infrastructure, you can’t hope to truly understand bottlenecks and how to fix them.
In a prior life I spent a number of years as a manufacturing engineer. My job was to deliver quality products at a high yield (think fast I/Os). We could measure the end product to see if it met the specified tolerances (are the I/Os fast enough?). But that was completely insufficient to manage the yield. For that we had “process control”. We had process specifications for each step, and if all the processes were “in spec” the product would generally be good. Thus, if we monitor the underlying physical infrastructure and take action to fix things that are “out of spec”, we can go a long way towards assuring high quality I/O. For example, if we know that none of the drives, adapters, ports and fabric links are over-utilized, we can be fairly confident that we won’t run into performance issues. With the right tools, you can do this and keep the virtual world from interrupting your real sleep! IntelliMagic Vision is one example of a tool that can help you sleep better. For more information, go to www.intellimagic.com.
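The process-control idea above can be sketched in a few lines of code: check each physical component's utilization against a per-component "spec" and flag anything out of spec before volume-level response times suffer. The threshold values and component names below are illustrative assumptions, not vendor guidance:

```python
# Illustrative utilization "specs" (max acceptable busy %) per component type.
# Real limits depend on your hardware and workload profile.
SPEC_LIMITS = {
    "drive": 60,
    "adapter": 70,
    "port": 70,
    "fabric_link": 50,
}

def out_of_spec(measurements):
    """Return components whose utilization exceeds the spec limit.

    measurements: iterable of (component_type, component_id, utilization_pct)
    """
    violations = []
    for ctype, cid, util in measurements:
        limit = SPEC_LIMITS.get(ctype)
        if limit is not None and util > limit:
            violations.append((ctype, cid, util, limit))
    return violations

# Hypothetical samples from one measurement interval:
samples = [
    ("drive", "d17", 82.0),     # a busy spindle - out of spec
    ("port", "fa-3e", 45.0),    # healthy
    ("adapter", "da-1", 71.5),  # just over the line
]
for ctype, cid, util, limit in out_of_spec(samples):
    print(f"{ctype} {cid}: {util:.1f}% exceeds spec of {limit}%")
```

If every component passes this kind of check each interval, the "product" (I/O response time) will generally stay in tolerance, just as in the manufacturing analogy.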