With Hyper-V 3.0, administrators can now assign 1 TB of virtual RAM to VMs. Though impressive, dedicating that much memory to a single virtual machine (VM) could cause serious problems for live migration, clustering and high availability.
Microsoft’s marketing department has expended a lot of energy touting Hyper-V 3.0’s new maximum-memory threshold, which is now on par with vSphere 5’s. But there’s a real-world memory-allocation issue here: In more than 15 years of IT experience, I haven’t encountered a Windows workload that needs 1 TB of RAM. Such workloads may exist, but the vast majority of computational problems these days are likely better served through massively parallel processing across many smaller VMs than through one massive VM.
In fact, creating a VM with 1 TB of virtual RAM may create a load-balancing and high-availability nightmare.
More virtual RAM, more potential hiccups
First and foremost, IT pros cannot easily move large VMs. Live migration requires at least two hosts that are powerful enough to handle the large VM, which may be cost-prohibitive. Additionally, ensuring appropriate availability for the VM may even require an entire cluster of capable hosts.
The big-VM problem doesn’t stop with the hardware. The presence of large VMs in a cluster can also impede your ability to load balance.
To put this problem in perspective, consider a four-host cluster that houses several virtual machines, including one big VM. These workloads roughly consume three hosts’ worth of physical resources, leaving the fourth host available to protect against a failure. The large VM, however, has the potential to hinder resource optimization within the cluster.
Each virtual machine’s resource usage constantly fluctuates, which means the cluster must actively load balance. Migrating a VM with 1 TB of virtual RAM among the hosts may require offloading other VMs to make room. That process can further unbalance the load and create a cascading effect until the load stabilizes.
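As a rough illustration of that cascade, consider the toy Python sketch below. It is not Hyper-V’s actual placement logic; the host names, memory sizes and greedy eviction rule are all hypothetical. The point is simply that moving one 1 TB VM can trigger several secondary migrations:

```python
# Toy model of a cluster; all memory figures are illustrative GB values.
HOST_CAPACITY = 1536  # hypothetical per-host RAM

hosts = {
    "host1": [("bigvm", 1024), ("app1", 256)],
    "host2": [("app2", 512), ("app3", 512), ("app4", 256)],
    "host3": [("app5", 512), ("app6", 512)],
    "host4": [],  # spare capacity held back for failover
}

def used(h):
    return sum(mem for _, mem in hosts[h])

def migrate(vm, mem, src, dst):
    """Move one VM; if the destination lacks room, evict its smallest VMs
    first. Each eviction is itself a live migration -- the cascade."""
    moves = []
    while HOST_CAPACITY - used(dst) < mem:
        evicted = min(hosts[dst], key=lambda v: v[1])
        hosts[dst].remove(evicted)
        # Park the evicted VM on the least-loaded remaining host.
        fallback = min((h for h in hosts if h not in (src, dst)), key=used)
        hosts[fallback].append(evicted)
        moves.append((evicted[0], dst, fallback))
    hosts[src].remove((vm, mem))
    hosts[dst].append((vm, mem))
    moves.append((vm, src, dst))
    return moves

cascade = migrate("bigvm", 1024, "host1", "host2")
print(len(cascade), "migrations:", cascade)
```

With these made-up numbers, rebalancing the 1 TB VM onto host2 first forces two smaller VMs off of it, so a single rebalancing decision costs three live migrations instead of one.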
In the event of a cluster host failure, a sizeable VM can also inhibit high-availability activities. Upon recognizing the lost node, the cluster’s surviving members must determine how to best redistribute the failed host’s VMs. With a large VM, this process may require extra time or effort, or may produce a less-than-optimal result (e.g., resource-starved VMs on a taxed host) to bring services back online.
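The failover decision can be sketched the same way. The snippet below uses a hypothetical first-fit-decreasing placement over invented survivor loads, not the cluster service’s real algorithm; it shows how the big VM must land on whichever survivor has the most free memory, even if that leaves the host completely taxed:

```python
# Toy failover: redistribute a failed host's VMs onto the survivors.
# Capacities and in-use figures (GB) are illustrative assumptions.
CAPACITY = 1536
survivors = {"host1": 1280, "host2": 1024, "host3": 512}  # memory in use
failed_vms = [("bigvm", 1024), ("app7", 128), ("app8", 64)]

placements = {}
for vm, mem in sorted(failed_vms, key=lambda v: -v[1]):  # largest first
    target = min(survivors, key=survivors.get)           # most free memory
    if survivors[target] + mem > CAPACITY:
        placements[vm] = None  # nowhere to restart -- service stays down
    else:
        survivors[target] += mem
        placements[vm] = target

print(placements)
```

Here the 1 TB VM only fits on host3, driving it to 100% memory utilization, which is exactly the less-than-optimal, resource-starved outcome described above.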
Memory allocation: Part of the hypervisor cold war
The most heated battles in Microsoft and VMware’s hypervisor war are behind us, but each vendor still seems determined to top the other by releasing its latest hypervisor with more features, updates and improvements.
The problems associated with “big VMs” are not exclusive to Hyper-V. VMware recently raised the maximum virtual RAM allowances for vSphere 5 VMs, which has stirred similar concerns among IT administrators. Memory allocation increases in Citrix Systems’ XenServer have likely prompted similar apprehension as well.
At the end of the day, these touted improvements to virtual RAM maximums seem more a feather in each vendor’s cap than anything else. Though your hypervisor of choice may allow for VMs with 1 TB of memory, you still must properly size your VM workloads. Otherwise, your virtual infrastructure may perform at suboptimal levels.