These differences -- plus the fact that Hyper-V lacked dynamic memory allocation until this year -- have spurred much debate among VMware and Microsoft users about the merits of each feature. In this face-off, two virtualization experts debate the pros and cons of Hyper-V Dynamic Memory and VMware memory overcommit.
Hypervisors use various virtual memory management techniques to keep virtual machines (VMs) running as memory conditions grow increasingly tight.
In Microsoft Hyper-V R2 Service Pack 1, the Dynamic Memory feature uses a memory-ballooning process that is similar to VMware vSphere’s. Built into Hyper-V’s Integration Components is a guest kernel enlightenment that lets a VM report to the host which memory pages are (and are not) in use. As such, the host can add and remove guest memory as required.
Although Microsoft uses a technique similar to vSphere’s, the user experience with Hyper-V Dynamic Memory is quite different. With dynamic memory allocation, administrators no longer assign a fixed quantity of memory to each VM. Instead, VMs simply claim the memory they need, with the host performing a rebalancing pass every second. As a result, memory is always right-sized to workload requirements, which greatly increases potential VM density.
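The per-second rebalancing pass can be sketched roughly as follows. This is a simplified illustration, not Microsoft's actual algorithm; the VM names, demand figures, and buffer percentage are hypothetical:

```python
# Hypothetical sketch of one dynamic-memory rebalancing pass: each VM
# reports its current demand, and the host grants demand plus a buffer,
# clamped to the VM's configured minimum and maximum (all values in MB).

def rebalance(vms, buffer_pct=20):
    """Right-size each VM's allocation: demand + buffer, within min/max."""
    for vm in vms:
        target = vm["demand"] * (100 + buffer_pct) // 100
        vm["allocated"] = max(vm["min"], min(vm["max"], target))
    return vms

vms = [
    {"name": "web01", "demand": 900,  "min": 512,  "max": 4096, "allocated": 0},
    {"name": "db01",  "demand": 6000, "min": 1024, "max": 4096, "allocated": 0},
]
rebalance(vms)
for vm in vms:
    print(vm["name"], vm["allocated"])  # web01 gets 1080; db01 is capped at 4096
```

Note how the maximum setting caps a memory-hungry VM even when its demand exceeds it, while the buffer leaves headroom for short-term spikes between passes.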
Hyper-V Dynamic Memory also has a greater range of configurable options than does VMware memory overcommit. Users can assign limits to problematic VMs with memory-hungry workloads, and if memory contention occurs, users can prioritize specific VMs. A configurable buffer value also specifies how much extra memory is reserved for short-term needs between rebalancing passes.
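Under contention, prioritization might work along these lines. The weighting scheme below is a hypothetical sketch for illustration, not Hyper-V's documented behavior, and all memory figures are invented:

```python
# Hypothetical sketch: when host memory is scarce, guarantee each VM its
# minimum, then split the remainder by priority weight (higher = favored).

def allocate_under_contention(vms, host_free_mb):
    """Distribute scarce memory (MB) above each VM's minimum by priority."""
    guaranteed = sum(vm["min"] for vm in vms)
    spare = max(0, host_free_mb - guaranteed)
    total_weight = sum(vm["priority"] for vm in vms)
    for vm in vms:
        extra = spare * vm["priority"] // total_weight
        vm["allocated"] = min(vm["max"], vm["min"] + extra)
    return vms

vms = [
    {"name": "web01", "min": 512,  "max": 4096, "priority": 1, "allocated": 0},
    {"name": "db01",  "min": 1024, "max": 8192, "priority": 3, "allocated": 0},
]
allocate_under_contention(vms, host_free_mb=4096)
for vm in vms:
    print(vm["name"], vm["allocated"])  # db01 receives three times web01's share of the spare memory
```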
Notwithstanding its greater configuration control, Hyper-V’s complete elimination of static memory assignment represents a superior approach to the ballooning technique of vSphere and other platforms. With Hyper-V Dynamic Memory, VMs simply take the memory they require, which eliminates the guesswork of memory assignments.
Memory overcommit is just part of VMware’s approach to memory management, which also includes Transparent Page Sharing, memory compression and ballooning. With this transparent approach, applications and the OS always see the same amount of memory, regardless of what the hypervisor does behind the scenes.
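The idea behind Transparent Page Sharing can be sketched in a few lines: identical memory pages across VMs are detected (typically by hashing their contents) and backed by a single physical copy until one VM writes to its page. This is a toy illustration of the concept, not VMware's implementation:

```python
import hashlib

def share_pages(pages):
    """Collapse identical pages into one shared physical copy.

    Returns the deduplicated store and a per-page mapping into it;
    a real hypervisor would mark shared pages copy-on-write.
    """
    shared = {}   # content hash -> canonical physical page
    mapping = []  # each guest page's reference into the shared store
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        shared.setdefault(digest, page)
        mapping.append(digest)
    return shared, mapping

# Two zero-filled pages (common across guest OSes) and one unique page.
pages = [b"\x00" * 4096, b"kernel" + b"\x00" * 4090, b"\x00" * 4096]
shared, mapping = share_pages(pages)
print(len(pages), "guest pages backed by", len(shared), "physical copies")
```

Because the guest still sees its full address space, the sharing is invisible to applications, which is the "transparent" part of the technique.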
You can tweak VMware’s memory management settings, but by default the hypervisor handles everything automatically. This approach allows you to maximize physical memory and achieve much better VM density.
In response, Microsoft has finally added the Hyper-V Dynamic Memory feature. Dynamic memory allocation controls only the amount of physical memory allocated to a VM, and it lets you define only the initial, minimum and maximum memory amounts. Then the hypervisor adds and removes memory as needed.
Hyper-V Dynamic Memory has some big problems. It works only with versions of Windows that support hot-adding RAM; Linux and other OSes are not supported. (VMware’s techniques work with any OS.)
Even worse, you can’t hot-remove RAM from a VM with Hyper-V Dynamic Memory; you must reboot a VM to reduce its memory. Adding or removing memory from any running server is a bad idea. Why not right-size it to begin with? Changing the amount of memory in a running VM can seriously degrade the performance of the applications inside it, and few applications handle it gracefully.
Microsoft’s memory management approach is very weak; the company should have once again copied what VMware does. If you’re going to innovate, do it right. Otherwise, just bite the bullet and emulate.
This was first published in May 2011