How to design your server virtualization infrastructure
Computer memory has never been cheap and, despite early industry predictions, 640 KB is not enough. However, operating systems have had a trick up their sleeve for many years now: most modern operating systems create a swap or page file that extends physical memory by using hard drive space.
This space reserved on the hard drive is used to page infrequently used data from memory to the slower and more cost-effective disk. Application needs and data storage have never stopped growing, but memory and storage growth have not been linear. While memory size has slowly increased, hard drives have exploded in capacity.
With the introduction of virtualization, computer memory could handle dozens of virtual machines. Storage capacity was adequate for multiple VMs, but the performance of the drive spindle could not keep up. Single drives have given way to multispindle RAID groups, where the number of spindles matters more than raw capacity. The problem is that drive sizes have grown so large that it is easy to end up with terabytes of capacity you cannot actively use because you're limited by storage IOPS.
For many, solid-state drives (SSDs) have been the answer. With no moving parts, their performance is far closer to memory speed than a spinning disk can manage; the drawback is a much higher cost per gigabyte. Those performance characteristics make SSDs ideal for virtualization; the issue is capacity, with most SSDs measured in gigabytes instead of terabytes. That makes it critical to use the space efficiently. Since SSD space is at a premium, it makes sense to reserve it for high-I/O workloads and limit its role in static or dead space. Unfortunately, swap files represent exactly that: critical but (hopefully) rarely used storage. As we look to virtualize on SSDs, we need to find creative ways to save space where we can.
If we take a simple example of a Windows VM with four processors, 16 GB of memory, a 20 GB operating system and 50 GB of additional storage needs, we will find it actually requires 110 GB of storage space. That is a 36% increase, or an additional 40 GB, over the original request. Where does that extra space come from and why does it seem our storage capacity is disappearing into a Bermuda Triangle? Let's dig a bit deeper and find out what's using storage capacity behind the scenes.
Windows has long used pagefile.sys to swap memory pages to disk as needed. On desktops the page file size is dynamic, but on a server it is often fixed. The appropriate size depends heavily on applications and roles, but a common default is 1.5 times the amount of installed RAM. If your server has 2 GB of memory, you're looking at 3 GB of drive space reserved for the paging file. In our server example, with 16 GB of memory, the page file would be 24 GB. That is a lot of space, but it gets even worse. VMware also creates a swap file on the datastore equal to the size of the VM's memory, in case VM memory swapping needs to occur. In our example, that is a 16 GB file. Combined with the 24 GB needed by Windows, we have 40 GB (36% of the total) of space that is required, but that we hope we will never have to use.
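As a sanity check, the arithmetic above can be sketched in a few lines. The sizes and the 1.5x multiplier are the example values from this article, not universal defaults:

```python
# Storage footprint for the example VM: 16 GB RAM, 20 GB OS disk,
# 50 GB data disk. All sizes in GB.
ram = 16
os_disk = 20
data_disk = 50

windows_pagefile = 1.5 * ram  # common 1.5x RAM sizing -> 24 GB
vmware_swap = ram             # per-VM swap file equals VM memory with no reservation -> 16 GB

requested = os_disk + data_disk                      # 70 GB the admin actually asked for
total = requested + windows_pagefile + vmware_swap   # 110 GB actually consumed
overhead = windows_pagefile + vmware_swap            # 40 GB of swap space

print(total, overhead, round(overhead / total * 100))  # 110.0 40.0 36
```

Run against the example VM, this reproduces the 110 GB footprint and the 36% overhead cited above.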
On a single virtual machine, that is a high number, but if we multiply that by 100 VMs, the potential dead space is now 4 TB. That's a lot of wasted resources, especially if it's on an SSD. So let's look at a few options that can help to recover some of that space.
Windows Swap File: While it is possible to configure the server with no swap space at all, doing so goes against Microsoft best practices. Reducing the swap file will reclaim some space, but it can prevent you from collecting a full memory dump in the event of a crash. A slightly more radical option is to create a separate virtual disk in VMware on a lower tier of storage, sized with more swap space than needed, and move the Windows swap file to it. Keep in mind, however, that unless the swap file is on the system disk, Windows will not be able to capture a memory dump.
VMware Swap File: VMware's per-VM swap file is a bit harder to pin down than the Windows page file. Its size is the VM's configured memory minus any memory reservation. So in our example, if 8 GB of memory were reserved for the VM, the VMware swap file would only be 8 GB. If you reserved all 16 GB of memory for the VM, you wouldn't need any disk space for the swap file at all, but you would also eliminate your ability to overcommit memory -- making it a less-than-ideal choice. VMware also allows you to move the swap file to another tier of storage. While this can help preserve the higher tier of storage, it adds another wrinkle to managing and backing up VM files that are now spread across multiple locations, so it might not be ideal.
While it would seem there is no easy way out, compromise can ease these pain points. While tradition and Microsoft recommend a page file of 1.5 times the amount of installed RAM, many administrators are starting to default to 4 GB, with very few ever going over 6 GB. After all, the main reason to keep a large page file is so you can hand Microsoft a full dump file when troubleshooting. Unless you are having issues, why waste expensive SSD space? With the VMware swap file, memory overcommit is a benefit too valuable to give up entirely. However, most administrators are not overcommitting by 50%; they are trending closer to a safety zone of 20% to 30%. That means a 50% memory reservation on your VMs preserves enough overhead for possible issues while still reducing the storage footprint.
Between the reduction of the Windows page file by 20 GB (from 24 GB down to 4 GB) and the VMware reduction of 8 GB, we can easily reclaim 28 GB of storage. That cuts the storage waste from 40 GB (36%) to 12 GB (roughly 11%) with a few minor tweaks that don't significantly increase risk. Of course, increasing the VMware reservation or reducing the Windows swap may not be right for every workload, so be sure to balance storage efficiency with business requirements and risks.
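Rerunning the example with those compromises (a fixed 4 GB page file and a 50% memory reservation, both judgment calls discussed above rather than vendor defaults) shows where the savings come from:

```python
# Tweaked example VM: 16 GB RAM, 20 GB OS disk, 50 GB data disk (all sizes in GB).
ram = 16

old_waste = 1.5 * ram + ram          # 24 GB page file + 16 GB VMware swap = 40 GB
new_pagefile = 4                     # fixed 4 GB page file instead of 1.5x RAM
new_vswp = ram - 0.5 * ram           # 50% reservation halves the swap file -> 8 GB
new_waste = new_pagefile + new_vswp  # 12 GB still set aside

reclaimed = old_waste - new_waste    # 28 GB back per VM
print(reclaimed, new_waste)          # 28.0 12.0
```

Across the 100-VM scenario above, that per-VM figure multiplies out to several terabytes of reclaimed tier-one storage.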