Virtualized computing has grown to depend on server memory. Every virtual server needs memory space for the host operating system, and every virtual machine (VM) will run as a unique file image held in memory. IT administrators routinely deal with memory allocation issues -- even on modern servers with 1 TB of memory or more -- and memory shortages will directly limit the number of VMs that a server can support. To combat these limitations, most hypervisors employ a set of memory management techniques to reclaim unused memory or make use of a second storage source, as in memory paging. This tip on the various forms of memory paging is the second in a two-part series on memory management techniques.
VMs often require more memory during startup than during normal runtime. This is expected behavior, but it can pose a problem when a VM restarts on a host that has no physical memory left to satisfy that startup demand.
Smart paging is an adaptation of traditional memory paging (or swap files) added to Hyper-V in Windows Server 2012, where disk space is used to supplement shortages of physical memory. Paging ensures that the VM will not crash due to lack of memory, but performance is reduced because disk access is at least an order of magnitude slower than memory access.
Smart paging is used only during VM restarts when there is no memory available and none can be reclaimed. But smart paging is not used when memory is oversubscribed or when the VM fails over in a Hyper-V cluster. In addition, smart paging will eventually stop within about 10 minutes, once the VM is restarted and its memory needs drop back within the available memory space.
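The narrow conditions described above can be sketched in a short model. This is only an illustration of the decision rules, not Hyper-V's actual implementation; the names, fields, and the ten-minute constant are assumptions drawn from the description.

```python
from dataclasses import dataclass

# Hypothetical model of the smart paging rules described above; names and
# fields are illustrative, not a real hypervisor API.

SMART_PAGING_WINDOW_SECONDS = 10 * 60  # paging stops ~10 minutes after restart

@dataclass
class RestartContext:
    is_restart: bool            # VM is restarting (not oversubscribed, not failing over)
    free_memory_mb: int         # physical memory currently free on the host
    reclaimable_memory_mb: int  # memory the hypervisor could reclaim from other VMs
    required_memory_mb: int     # startup memory the VM is asking for
    seconds_since_restart: int

def should_use_smart_paging(ctx: RestartContext) -> bool:
    """Return True only in the narrow case smart paging is meant for."""
    if not ctx.is_restart:
        return False  # not used for oversubscription or cluster failover
    if ctx.seconds_since_restart > SMART_PAGING_WINDOW_SECONDS:
        return False  # smart paging is temporary; it ends once the VM settles
    available = ctx.free_memory_mb + ctx.reclaimable_memory_mb
    return available < ctx.required_memory_mb  # nothing free and nothing reclaimable

# A restarting VM that needs 2 GB when only 1.5 GB can be found qualifies:
ctx = RestartContext(True, 1024, 512, 2048, 30)
print(should_use_smart_paging(ctx))  # True
```

The key point the model captures is that all three tests must pass: a memory shortfall alone is not enough unless it occurs during the short window after a restart.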
VMware ESXi implements an additional memory paging technique called hypervisor swapping. This approach is similar in principle to Hyper-V smart paging in that the hypervisor controls swapping between VM memory and a disk swap file. However, where smart paging is designed as a temporary measure to facilitate VM startups, hypervisor swapping provides long-term page swap support for some amount of memory reclamation.
Unfortunately, hypervisors generally have no insight about which VM memory pages are unused, so the hypervisor doesn't know which pages are best to swap out. Although the technique can certainly reclaim a known amount of memory, the content that is swapped out may routinely need to be swapped back into memory because it's actually needed by the VM at that moment -- severely impacting VM performance. In addition, hypervisor swapping may conflict with ballooning (memory paging determined within the VM operating system) and further reduce performance. Consequently, hypervisor swapping is a "last resort" memory reclamation technique.
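The cost of that missing insight can be shown with a toy sketch. This is not VMware code; it simply models a hypervisor that must reclaim a fixed number of pages but, lacking guest knowledge, picks them blindly, so some of what it swaps out is memory the guest is actively using.

```python
import random

# Illustrative sketch only: the hypervisor reclaims a known amount of memory,
# but because it cannot tell hot pages from cold ones, its choice of victim
# pages is essentially uninformed.

def hypervisor_swap_out(pages: list[int], needed_pages: int,
                        rng: random.Random) -> list[int]:
    """Pick pages to swap to disk without any view of guest page usage."""
    return rng.sample(pages, needed_pages)  # blind selection, exact amount

rng = random.Random(0)
resident = list(range(100))                       # 100 resident guest pages
swapped = hypervisor_swap_out(resident, 25, rng)  # reclaim exactly 25 pages

# Suppose the guest actively uses pages 0-49. Any swapped page in that hot
# set will soon fault back in at disk speed -- the penalty described above.
hot = set(range(50))
refaults = sum(1 for p in swapped if p in hot)
print(f"{refaults} of 25 swapped pages will need to be swapped back in")
```

A guest-aware mechanism such as ballooning avoids exactly this: the guest's own memory manager would have steered the selection away from the hot set.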
Ballooning is another way that VMware ESXi handles overcommitted memory through the use of supplemental disk space. However, rather than the hypervisor handling the page swapping, ballooning is designed to force the VM's guest operating system to decide which memory pages are less important and swap them to disk -- freeing more physical memory space for the VM. This is often regarded as a superior method of memory paging, because the OS memory manager is more aware of memory use within the VM than the hypervisor.
Ballooning starts by installing a balloon driver in the VM's operating system. When the VM starts getting low on physical memory, the balloon driver artificially causes the guest OS to think there is even less memory. This "inflates" the balloon and causes the OS to use its own memory management algorithm to start swapping out less important memory pages to a disk swap file, freeing that space for reclamation. When that physical memory is freed again, the balloon driver "deflates," releasing that artificial pressure on memory and allowing the OS to swap the pages from disk back into memory again.
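The inflate/deflate cycle above can be modeled in a few lines. This is a toy model, not the real balloon driver; the class names and the megabyte accounting are assumptions used to show how inflation pressures the guest's own memory manager into choosing what to page out.

```python
# Toy model of ballooning; names are illustrative, not a real driver API.

class GuestOS:
    def __init__(self, total_mb: int, used_mb: int):
        self.total_mb = total_mb
        self.used_mb = used_mb   # memory the guest considers in use
        self.swapped_mb = 0      # memory the guest chose to move to its swap file

    def free_mb(self) -> int:
        return self.total_mb - self.used_mb

    def page_out(self, mb: int) -> None:
        # The guest's own memory manager picks the least important pages.
        mb = min(mb, self.used_mb)
        self.used_mb -= mb
        self.swapped_mb += mb

class BalloonDriver:
    def __init__(self, guest: GuestOS):
        self.guest = guest
        self.inflated_mb = 0  # memory pinned by the balloon, reclaimable by the host

    def inflate(self, target_mb: int) -> int:
        """Grow the balloon; the guest swaps if it lacks free memory."""
        shortfall = target_mb - self.guest.free_mb()
        if shortfall > 0:
            self.guest.page_out(shortfall)  # guest decides *which* pages go
        self.inflated_mb += target_mb
        self.guest.used_mb += target_mb     # balloon pages look "in use" to the guest
        return self.inflated_mb             # the host can now reclaim this much

    def deflate(self, mb: int) -> None:
        mb = min(mb, self.inflated_mb)
        self.inflated_mb -= mb
        self.guest.used_mb -= mb            # guest may swap its pages back in

guest = GuestOS(total_mb=4096, used_mb=3584)  # only 512 MB free
balloon = BalloonDriver(guest)
balloon.inflate(1024)                         # host needs 1 GB back
print(guest.swapped_mb)                       # 512: the guest paged out 512 MB of its choosing
balloon.deflate(1024)
print(balloon.inflated_mb)                    # 0: pressure released
```

Note what the hypervisor never has to do here: it never picks the victim pages. It only sets the balloon target, and the guest OS, which knows its own working set, decides what to sacrifice.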
Server memory has become a gating factor for both consolidation and performance, and IT administrators routinely struggle to provision VMs in a way that balances the two. Hypervisor vendors are acutely aware of memory limitations and their side effects, and have developed a mix of techniques that can extend the effective memory available and maximize workload consolidation.
This was first published in June 2013