Over the past few years, the conventional wisdom concerning virtual machine (VM) configuration has changed. There are two VM configurations many administrators got wrong during virtualization's early days: assigned memory and assigned processors.
Virtual machine configuration affects server consolidation ratios: If resources are optimized in each VM, more VMs can fit on a physical host. Greater consolidation nets more capacity, and more capacity gives your business more flexibility to run additional VMs when needed.
If you add unnecessary memory and processing units to VMs, fewer VMs can then fit on a host. So the virtual machine configuration for memory and processing has a noticeable effect on performance and capacity.
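The capacity effect is simple arithmetic. This sketch (the host and VM sizes are hypothetical, and it considers memory alone) shows how overallocation cuts the consolidation ratio:

```python
def consolidation_ratio(host_memory_gb, per_vm_memory_gb):
    """How many VMs fit on a host, considering memory alone
    (ignores CPU, hypervisor overhead and reservations for simplicity)."""
    return host_memory_gb // per_vm_memory_gb

host = 128  # hypothetical host with 128 GB of RAM

# Each VM actually needs 2 GB, but is assigned 4 GB "just in case":
print(consolidation_ratio(host, 4))  # overallocated: fewer VMs fit
print(consolidation_ratio(host, 2))  # right-sized: twice as many fit
```

Halving each VM's assigned memory doubles the number of VMs the same host can hold, which is exactly the flexibility the article describes.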
Misguided memory beliefs
The prevailing guidance for virtual machine configuration today is that a VM should be assigned only as much memory as it actually requires. If its activities consume only 2 GB of RAM, there's no benefit to assigning it 4 GB.
This practice stems from two incorrect beliefs: that assigning more memory makes a VM faster, and that any unused memory will simply be reclaimed by the hypervisor's balloon driver at no cost and returned to the pool for other VMs.
In reality, assigning more memory than a VM needs only creates empty space that the VM never uses. Measuring that VM's memory consumption over time will show that it doesn't actually touch the extra memory.
Major hypervisor vendors have extolled their memory methods, which many people boil down to this: “It doesn’t matter if I assign too much memory to VMs.” But this belief in memory overcommit couldn’t be further from the truth.
Even though a hypervisor is capable of memory overcommit in the virtual machine configuration, it's better not to overassign memory, because reclaiming unused memory from one VM so that another can use it consumes resources of its own.
In the Best Practices section of Understanding Memory Resource Management in VMware ESX 4.1, VMware recommends setting the VM memory size to “slightly larger than the average guest memory usage.” The “slightly larger” size is suggested to accommodate spikes in VM workload, such as logins, backups, malware scans or other non-standard activities. So if you assign more memory than VMs need just because you can, consider rethinking your virtual machine configuration.
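That sizing rule can be sketched as average observed usage plus a headroom factor for spikes. The 25% headroom default below is an illustrative assumption, not a figure from VMware's paper:

```python
import statistics

def recommend_memory_gb(usage_samples_gb, headroom=0.25):
    """Suggest a VM memory size 'slightly larger' than average observed
    usage, leaving room for spikes such as logins, backups or scans.
    The 25% headroom default is an illustrative assumption."""
    average = statistics.mean(usage_samples_gb)
    return round(average * (1 + headroom), 1)

# Hypothetical usage samples averaging around 2 GB, with one backup spike:
samples = [1.8, 2.0, 2.1, 1.9, 2.2, 3.0, 1.9]
print(recommend_memory_gb(samples))
```

Sizing from measured averages this way lands well below a "just assign 4 GB" habit while still covering non-standard activity.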
Overdoing the overcommit: Assigned processors
Memory overcommit can be bad, but overcommitting processors is even worse. Assigning multiple processors to a VM that runs a single-threaded application, for instance, can hurt its performance because of how a hypervisor schedules a VM’s processing needs.
Today’s hypervisors no longer need to schedule all of a VM’s virtual processors at exactly the same time, but maintaining a consistent view across processors whose activities are not scheduled together -- known as co-descheduled processors -- requires additional resources. If a virtual server’s workload is not multithreaded, it doesn’t need the capability to run that workload across multiple processors. Basically, by giving a single-threaded VM multiple processors, you’re paying an additional tax for no additional benefit.
It can be challenging to determine whether you need multiple processors in a virtual machine configuration. Windows tends to balance process threads across available processors, so it’s hard to tell whether a workload genuinely uses multiple processors or whether its threads are simply being load-balanced across them. That load balancing itself incurs overhead, further increasing the tax, and it obscures whether the workload is truly multithreaded.
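One rough heuristic (an assumption for illustration, not a vendor-documented method) is to sample per-processor utilization and ask whether the total load would have fit on a single processor. Because thread balancing can spread a single-threaded workload across cores, only the sum of the per-core figures is meaningful:

```python
def looks_single_threaded(per_cpu_percent):
    """Heuristic: True if the VM's total observed CPU load would fit
    within one processor, i.e. extra virtual CPUs likely add no benefit.
    per_cpu_percent: utilization (0-100) sampled for each virtual CPU."""
    return sum(per_cpu_percent) <= 100

# A 4-vCPU VM whose load is spread across cores but sums to one core's worth:
print(looks_single_threaded([26, 24, 25, 23]))  # balanced, yet single-threaded
print(looks_single_threaded([90, 85, 88, 92]))  # genuinely parallel workload
```

A VM that passes this check over a representative period is a good candidate for dropping back to one virtual processor.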
Starting with only one virtual processor is a best practice for virtual machine configuration, unless you are certain that a VM’s activities are multithreaded and can benefit from having multiple processors. You might be tempted to take advantage of memory overcommit and the ability to add processors, but it’s often not necessary.
This was first published in April 2011