Given the cost of storage for virtualization and the rapid proliferation of virtual machines (VMs), it’s important to use storage space efficiently.
Host servers often store the virtual hard disk (VHD) files for individual VMs on expensive storage mechanisms such as SANs. But to optimize virtualization storage space, you're better off using dynamic storage allocation -- also known as thin provisioning -- which expands VHD files on demand to accommodate storage needs. Of course, that flexibility cuts both ways.
What is dynamic storage allocation?
Simply put, dynamic storage allocation is the ability to add storage to a VM on the fly, as storage is needed. This virtualization storage method helps reduce wasted storage space in your infrastructure.
When you create a new VM, you may not know exactly how much storage space the VM will require, and you don't want to risk running out of disk space. Before thin provisioning came along, many administrators would simply allocate a bit more storage space than they thought the VM would need. The problem with this approach to virtualization storage is that it wastes disk space: each VM consumes valuable storage space that may never be used.
Dynamic storage allocation allows you to set the maximum size of a VHD file, just as you always have. The difference is that the actual physical disk space is not consumed until the VM needs it. The VHD file starts out very small, regardless of how large you tell the hypervisor it needs to be. As you add data to the virtual hard disk, the VHD file dynamically expands to accommodate the data.
Many virtualization platforms come with dynamic storage allocation features. For virtualization storage, Microsoft Hyper-V uses thin provisioning by default. VMware ESX uses a virtual disk format known as zeroed thick by default, but the hypervisor also supports thin provisioning.
Thin provisioning certainly has its advantages, but this virtualization storage strategy has some negative aspects as well. For starters, neither Hyper-V nor VMware supports the automatic reclamation of dynamically allocated storage space. In other words, if you write a large file to a thinly provisioned virtual hard disk, the VHD file expands to accommodate the data. If you later delete the large file, the VHD file won't shrink. Other data can reuse the space the deleted file previously occupied, but it's not easy to give that space back to the host.
Virtualization storage management challenges
A bigger problem with thin provisioning is that it can complicate virtualization storage management. The VM doesn't use physical hard disk space until it's needed, so it's possible to overcommit physical storage resources. There's nothing stopping an administrator from creating ten 1 TB virtual hard disks on a half-terabyte logical unit number (LUN), for example. The problem is that as the VHD files expand, the LUN may eventually run out of physical storage space. When physical storage runs low, there are only two ways to deal with it: You can either move the VM to another storage pool, or you can add storage to the pool.
When you use thin provisioning for virtualization storage, you have to keep track of physical storage resources. This requirement has driven some administrators to avoid thin provisioning despite its benefits. However, management tools exist to track storage consumption. VMware vSphere even includes an alerting mechanism that warns you when a server is running low on storage.
Fragmentation from dynamic storage allocation
Another major issue with thin provisioning is that if a single LUN contains multiple thinly provisioned VHD files, then fragmentation occurs as the files expand. You can avoid fragmentation by using a separate LUN for each virtual hard drive, but that defeats the purpose of using thin provisioning in the first place. Instead, try using a virtualization-aware disk defragmentation product such as Diskeeper or Raxco PerfectDisk.
Dynamic storage allocation has its pros and cons, but in most cases, the benefits outweigh the risks. It’s important to keep track of virtualization storage resources and use defragmentation software to maintain optimal performance.
This was first published in June 2011