
Hyper-V 3.0 virtual RAM: Massive memory allocation has its faults

Hyper-V 3.0 supports VMs with up to 1 TB of virtual RAM. But memory allocation is a delicate process, and deploying massive VMs can create problems.

With Hyper-V 3.0, administrators can now assign up to 1 TB of virtual RAM to a virtual machine (VM). Though impressive, dedicating that much memory to a single VM could cause serious problems for live migration, clustering and high availability.


Microsoft’s marketing department has expended a lot of energy touting Hyper-V 3.0’s new maximum-memory threshold, which is now on par with vSphere 5’s. But there’s a real-world memory-allocation issue here: In more than 15 years of IT experience, I haven’t heard of a Windows workload that needs 1 TB of RAM. Such workloads may exist, but the vast majority of computational problems these days are better served through massively parallel processing across many machines than through one massive machine.

In fact, creating a VM with 1 TB of virtual RAM may create a load-balancing and high-availability nightmare.

More virtual RAM, more potential hiccups
First and foremost, IT pros cannot easily move large VMs. Live migration requires at least two hosts that are powerful enough to handle the large VM, which may be cost prohibitive. Additionally, ensuring appropriate availability for the VM may even require an entire cluster of suitably sized hosts.

The big-VM problem doesn’t stop with the hardware. The presence of large VMs in a cluster can also impede your ability to load balance.

To put this problem in perspective, consider a four-host cluster that houses several virtual machines, including one big VM. These workloads roughly consume three hosts’ worth of physical resources, leaving the fourth host available to protect against a failure. The large VM, however, can hinder resource optimization within the cluster.

Each virtual machine’s resource usage constantly fluctuates, which means the cluster must actively load balance. Migrating a VM with 1 TB of virtual RAM among the hosts may first require offloading other VMs to make room. That process can further unbalance the load and create a cascading effect until the load stabilizes.

In the event of a cluster host failure, a sizeable VM can also inhibit high-availability activities. Upon recognizing the lost node, the cluster’s surviving members must determine how best to redistribute the failed host’s VMs. With a large VM, bringing services back online may require extra time or effort, or may produce a less-than-optimal result (e.g., resource-starved VMs on a taxed host).
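The placement and failover problem can be illustrated with a toy simulation. Everything below is a hypothetical sketch: the host sizes, VM mix and simple first-fit heuristic are illustrative assumptions, not how Hyper-V or any real cluster scheduler actually places workloads.

```python
# Toy placement simulation for a four-host cluster. All sizes are
# hypothetical; real schedulers use far more sophisticated logic.

HOST_RAM_GB = 1536  # assumed physical RAM per host
N_HOSTS = 4

def place(vms_gb, hosts):
    """Place VMs (largest first) on the host with the most free RAM."""
    unplaced = []
    for vm in sorted(vms_gb, reverse=True):
        target = max(hosts, key=lambda h: h["free"])
        if target["free"] >= vm:
            target["free"] -= vm
            target["vms"].append(vm)
        else:
            unplaced.append(vm)
    return unplaced

hosts = [{"free": HOST_RAM_GB, "vms": []} for _ in range(N_HOSTS)]
workloads = [1024] + [128] * 8 + [64] * 20  # one big VM plus ordinary VMs

unplaced = place(workloads, hosts)
print("Unplaced VMs:", unplaced)
for i, h in enumerate(hosts):
    print(f"Host {i}: {len(h['vms'])} VMs, {h['free']} GB free")

# Failover check: if the host running the big VM dies, does any
# survivor have 1,024 GB free to restart it without evictions?
big_host = next(h for h in hosts if 1024 in h["vms"])
survivors = [h for h in hosts if h is not big_host]
print("Big VM restartable:", any(h["free"] >= 1024 for h in survivors))
```

With these numbers, every VM fits, but the 1 TB VM leaves its host with room for little else, and no surviving host has enough free memory to restart it after a failure: the cluster must first evict smaller VMs, which is exactly the cascading-rebalance problem described above.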

Memory allocation: Part of the hypervisor cold war
The most heated battles in Microsoft and VMware’s hypervisor war are behind us, but each vendor still seems determined to top the other by releasing its latest hypervisor with more features, updates and improvements.

The problems associated with “big VMs” are not exclusive to Hyper-V. VMware recently raised the maximum virtual RAM allowances for vSphere 5 VMs, which has stirred similar concerns among IT administrators. Memory allocation increases in Citrix Systems’ XenServer have likely prompted similar apprehension as well.

At the end of the day, these touted improvements to virtual RAM maximums seem like more of a feather in each vendor’s cap than anything else. Though your hypervisor of choice may allow for VMs with 1 TB of memory, you still must properly size your VM workloads. Otherwise, your virtual infrastructure may perform at suboptimal levels.


Join the conversation



Would you allocate 1 TB virtual RAM to a single VM?
I could see it happening in a single huge SQL server though
It's worthless.
I believe in distributed computing. Software should be designed to take advantage of smaller computing systems and collaborative computing.
Haven't seen an application requiring that amount of RAM.
Because if I allocate 1 TB of memory to one virtual machine, it means I already need two hosts with the same configuration, which is not a cost-effective thing.
I can't think of any reason why I would need to.
It's crazy; why would I do that?
The hardware for the Hyper-V or VMware physical host is too expensive.
Doesn't make sense. Increasing storage capacity per VM is only numbers; no usage in production.
I would most likely need to prepare some extra architecture if I had to run such a piece of software. So it depends on the application itself and on its importance. But 1 TB of virtual RAM for a single VM is beyond anything I can picture now.
1TB monster VMs - madness! It's hard enough to control overallocation of memory in the sub 10GB range!!
It's a waste of money and resources.