Here is the evolution of shared storage in many organizations as virtualization takes hold:
- Server Guy: I'm beginning to look into virtualization. It could be game-changing.
- Storage Guy: Have fun. Let me know how that goes.
- Server Guy: Virtualization is going to save us tons in hardware and infrastructure, and it may even improve our uptime. Do you have some shared storage I can tap into for some advanced features?
- Storage Guy: Yeah, we have some Tier 1 storage for the DBAs. I'll carve some out for you.
- Server Guy: Virtualization is our new standard, so I'll be asking for some more storage.
- Storage Guy: Let me know what you need, and I'll have it for you.
- Server Guy: These are critical workloads, with a lot of them on each LUN, so keep the Tier 1 storage coming.
- Storage Guy: I'll add some more storage to get us to the storage refresh next year.
- CIO: We need to do a storage refresh. How much will it cost to replace all of our storage?
- Storage Guy: Uh, we are a little bigger than we were three years ago at our last refresh. Maybe a lot bigger. You may want to sit down for this…
From there, the conversation can become very uncomfortable. A little analysis reveals that you are using only a fraction of your available storage performance, and that vast amounts of unused capacity are locked inside VMDK files throughout your environment.
Evaluating your storage situation
How you got here is obvious and not at all uncommon. You were distracted by dollar signs and infrastructure savings, and storage usage and its associated costs snuck up on you. Like so many others, you overestimated your storage performance needs and placed too much emphasis on Tier 1 storage.
How can you develop a better storage strategy for the next three years? Find a good tool to measure your actual storage performance needs, and buy the right mix of storage to meet them. EMC's Fully Automated Storage Tiering, Compellent's Fluid Data Storage and other dynamic storage tiering features can help, but you still need the right mix of storage resources between which these services can move data.
In other words, they help you be more efficient with your usage and data placement, but you still need to know how much of each tier to buy. You should also evaluate the cost-to-performance ratios for Fibre Channel, Network File System, iSCSI and Fibre Channel over Ethernet storage to determine the right mix of protocols to use within your environment.
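The idea of buying to measured need rather than defaulting to Tier 1 can be sketched in a few lines. This is a hypothetical illustration only: the workload names, IOPS figures and tier thresholds below are made-up examples, not measurements from any real tool.

```python
# Hypothetical per-workload measurements: (name, peak_iops, capacity_gb).
# Buckets each workload into a tier by its measured peak IOPS, then
# totals the capacity you would actually need to buy per tier.
workloads = [
    ("sql-prod", 9000, 500),
    ("exchange", 2500, 800),
    ("file-shares", 300, 2000),
    ("dev-vms", 150, 1500),
]

def tier_mix(measurements, tier1_iops=5000, tier2_iops=1000):
    """Return required capacity (GB) per tier, given IOPS thresholds."""
    mix = {"tier1": 0, "tier2": 0, "tier3": 0}
    for _, iops, gb in measurements:
        if iops >= tier1_iops:
            mix["tier1"] += gb
        elif iops >= tier2_iops:
            mix["tier2"] += gb
        else:
            mix["tier3"] += gb
    return mix

print(tier_mix(workloads))  # {'tier1': 500, 'tier2': 800, 'tier3': 3500}
```

With numbers like these, only 500GB of a 4.8TB estate actually justifies Tier 1 pricing -- the point of measuring before buying.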
Developing a storage strategy
Next, reclaim unused disk space using thin provisioning -- either through the hypervisor in VMware vSphere, or at the storage level with tools from NetApp, EMC and other vendors. Thin provisioning tools tell you that all of the storage space you requested is available, but they only give you as much as you currently need.
Say you requested 20GB of storage but are only storing 7GB of data. In this case, thin provisioning only gives you 7GB of actual storage on the array. The array will give you more as you begin to tap into the remaining 13GB. In essence, you do not get the storage until you are ready to use it, leaving all the free space on the disk array, which can yield some great returns.
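The arithmetic behind that example scales across an environment. A minimal sketch, using invented VMDK numbers (the first entry is the article's own 20GB/7GB example):

```python
# Hypothetical inventory of VMDKs as (provisioned_gb, used_gb) pairs.
# Shows how thin provisioning's savings are simply the gap between
# what was requested and what has actually been written.
vmdks = [
    (20, 7),    # the example above: 20GB requested, 7GB in use
    (100, 42),
    (50, 50),   # a full disk reclaims nothing
]

def thin_provisioning_savings(disks):
    """Return (provisioned, used, reclaimed) totals in GB."""
    provisioned = sum(p for p, _ in disks)
    used = sum(u for _, u in disks)
    return provisioned, used, provisioned - used

provisioned, used, reclaimed = thin_provisioning_savings(vmdks)
print(f"Provisioned: {provisioned}GB, backed: {used}GB, freed: {reclaimed}GB")
# Provisioned: 170GB, backed: 99GB, freed: 71GB
```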
If you want to really attack your storage costs, look into data deduplication. Thin provisioning reclaims unused space, but data deduplication actually reduces the footprint of the data being written to your shared storage.
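To see why deduplication shrinks the written footprint rather than just the free space, consider a toy model: split the data into fixed-size blocks, fingerprint each block, and store only the unique ones. This is a simplified sketch of the general technique, not how any particular vendor's array implements it.

```python
import hashlib

def dedup_footprint(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and fingerprint each one.
    Returns (logical_blocks, unique_blocks): the array only has to
    store the unique blocks plus pointers."""
    seen = set()
    logical = 0
    for i in range(0, len(data), block_size):
        seen.add(hashlib.sha256(data[i:i + block_size]).hexdigest())
        logical += 1
    return logical, len(seen)

# 40 identical blocks (think: the same guest OS cloned many times)
# followed by 10 distinct blocks of real per-VM data.
data = b"A" * 4096 * 40 + b"".join(bytes([i]) * 4096 for i in range(10))
logical, unique = dedup_footprint(data)
print(logical, unique)  # 50 logical blocks, only 11 stored
```

The win is biggest exactly where virtualization creates it: dozens of VMs built from the same template share most of their blocks.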
My final bit of advice is to set a lower default level of storage. Many people will default to Tier 1, then look for lower-priority items to move to Tiers 2 and 3. I would recommend flipping that practice. Default all storage to Tier 2, and only move items to Tier 1 or 0 if they can prove that Tier 2 is not meeting their needs.
It's never too late
If you are at the beginning of a virtualization deployment, step back and get strategic about storage. If you are at Day 400, look out. Get ahead of any potential issues and begin correcting course.
And if you are at Day 500, welcome to the crowd. Look at the past long enough to learn from it, then let those battle scars help you chart a path to a better solution. It is never too late to get on track, and there are some innovative vendors developing new technologies and providing options that did not even exist a year ago.
About the expert
Mark Vaughn (MBA, VCP, vExpert, BEA-CA) is a consulting principal for data center virtualization with INX, a Houston-based solutions provider. Vaughn has more than 14 years of experience in IT as a Unix administrator, developer, Web hosting administrator, IT manager and enterprise architect. For several years he has focused on using the benefits of virtualization to consolidate data centers, reduce total cost of ownership, and implement policies for high availability and disaster recovery. Vaughn is a recipient of the vExpert award for both 2009 and 2010, and he has delivered several presentations at VMworld and BEAWorld conferences in the U.S. and Europe. Read his blog at http://blog.mvaughn.us/.