The number of virtual machines that a server can support has always been limited by the sheer amount of computing resources available on the physical hardware. Most resources, such as processor cycles, storage I/O and network bandwidth, can be shared with relative ease. The idea is that workloads are not all busy 100% of the time, so sharing -- or overcommitting -- resources can allow higher levels of workload consolidation than the total amount of those resources might suggest, with minimal, if any, performance impact to the workloads.
However, server memory has traditionally been a fixed resource. Because each VM exists in memory as a complete image of the application and its data set, it has been important to provide enough memory to host each VM. Otherwise, the server would need to use disk-based swap files to supplement memory shortages -- often with devastating performance penalties for the affected VM. But this traditional paradigm has been changing as techniques evolve to overcommit memory or allow memory sharing.
Memory overcommitment provisions more memory to VMs, in total, than the host server physically contains. For example, suppose a simple host server has 4 GB of physical memory; memory overcommitment might allow perhaps six 1 GB VMs to be provisioned. At first blush, this seems an extremely dangerous endeavor, because two workloads cannot share two different pieces of data at the same memory address at the same time -- at least not without swapping data to disk first.
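The arithmetic behind the example above is simple enough to sketch; the figures here are the article's own 4 GB host and six 1 GB VMs:

```python
# Overcommitment ratio for the example above: total provisioned memory
# divided by physical memory on the host.
physical_gb = 4
vm_count, vm_size_gb = 6, 1

provisioned_gb = vm_count * vm_size_gb   # 6 GB promised to the VMs
ratio = provisioned_gb / physical_gb     # 1.5:1 overcommitment
print(f"{provisioned_gb} GB provisioned on {physical_gb} GB physical -> {ratio}:1")
```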
But designers quickly realized that many VMs do not use all of the memory provisioned to them, and that memory is essentially wasted while it sits idle. Hypervisors are designed to identify idle memory capacity and make it available to other VMs that need it. If no other VMs need additional memory, the overcommitted space can be used to provision additional VMs. Virtual memory configuration settings such as "Shares" set the VM's relative priority in the memory pool, while "Reservation" sets the minimum memory guaranteed to the VM to ensure adequate memory at all times.
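The shares/reservation model can be illustrated with a minimal sketch. This is a simplified, hypothetical allocator -- not any hypervisor's actual algorithm: every VM first receives its reservation (the guaranteed minimum), and whatever physical memory remains is divided in proportion to each VM's shares.

```python
# Hypothetical shares/reservation allocator: reservations are satisfied
# first, then leftover physical memory is split proportionally to shares.
def allocate_memory(vms, physical_mb):
    """vms: list of dicts with 'name', 'reservation_mb' and 'shares'."""
    # Step 1: guarantee every VM its reservation.
    alloc = {vm['name']: vm['reservation_mb'] for vm in vms}
    remaining = physical_mb - sum(alloc.values())
    if remaining < 0:
        raise ValueError("reservations exceed physical memory")
    # Step 2: split the remainder in proportion to shares
    # (integer division may leave a few MB unallocated).
    total_shares = sum(vm['shares'] for vm in vms)
    for vm in vms:
        alloc[vm['name']] += remaining * vm['shares'] // total_shares
    return alloc

vms = [
    {'name': 'vm1', 'reservation_mb': 512, 'shares': 2000},
    {'name': 'vm2', 'reservation_mb': 512, 'shares': 1000},
    {'name': 'vm3', 'reservation_mb': 256, 'shares': 1000},
]
print(allocate_memory(vms, 4096))
```

With these (made-up) settings, vm1's higher share count wins it the largest slice of the contested remainder, while every VM keeps at least its reservation.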
It's also no secret that VMs can share substantial amounts of content. For example, suppose the six 1 GB VMs provisioned above all run Windows Server 2012 R2, and two of those VMs run the same business application. Five of the six copies of Windows Server 2012 R2 and one of the two application copies would effectively be redundant. Memory sharing technology allows VMs to use one common instance of identical memory pages. This can lower the total amount of memory used by VMs and support higher levels of overcommitment. Memory sharing offers the same kind of efficiency in memory that data deduplication offers for disk storage.
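The deduplication analogy can be made concrete with a toy model. This is an illustration of the content-hashing idea behind page sharing (as in transparent page sharing or Linux's KSM), not a real hypervisor implementation; the page contents and counts are invented:

```python
import hashlib

# Toy model of content-based page sharing: identical pages across VMs are
# detected by hashing their contents, so each distinct page is stored once.
def shared_footprint(vm_pages):
    """vm_pages: dict mapping VM name -> list of page contents (bytes).
    Returns (total_pages, unique_pages) after deduplication."""
    total = 0
    unique = set()
    for pages in vm_pages.values():
        for page in pages:
            total += 1
            unique.add(hashlib.sha256(page).digest())
    return total, len(unique)

# Six VMs booting the same OS share nearly all of their read-only pages.
os_pages = [b'kernel-page-%d' % i for i in range(100)]
vm_pages = {f'vm{i}': list(os_pages) for i in range(6)}
vm_pages['vm0'].append(b'unique-app-data')

total, unique = shared_footprint(vm_pages)
print(total, unique)  # 601 pages across all VMs collapse to 101 unique pages
```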
It's important to note that memory overcommitment and memory sharing are both highly dynamic technologies that are influenced by the total computing load and the amount of common content. For example, lightly used VMs may free memory for overcommitment, but as usage picks up and memory demands increase, the hypervisor will need to return memory to the VM or risk swap file performance penalties. Similarly, VMs with different OS versions, applications and data may have far fewer memory pages in common to share. VM migration and workload balancing will also alter common memory content and affect memory sharing.
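The reclaim behavior described above can be sketched as well. This is a hypothetical balloon-style reclaim loop, not any vendor's implementation: when one VM's demand grows, idle memory is taken back from other VMs, but never below what they are actively using or below their reservation, before the host resorts to disk swapping.

```python
# Hypothetical balloon-style reclaim: pull idle memory from other VMs
# (never below in-use memory or the reservation) to satisfy new demand.
def reclaim(vms, needed_mb):
    """vms: dict name -> {'allocated_mb', 'in_use_mb', 'reservation_mb'}.
    Shrinks allocations in place; returns MB actually reclaimed."""
    reclaimed = 0
    for vm in vms.values():
        if reclaimed >= needed_mb:
            break
        floor = max(vm['in_use_mb'], vm['reservation_mb'])  # can't go lower
        idle = vm['allocated_mb'] - floor
        take = min(idle, needed_mb - reclaimed)
        vm['allocated_mb'] -= take
        reclaimed += take
    return reclaimed

vms = {
    'vm1': {'allocated_mb': 1024, 'in_use_mb': 300, 'reservation_mb': 256},
    'vm2': {'allocated_mb': 1024, 'in_use_mb': 900, 'reservation_mb': 512},
}
print(reclaim(vms, 500))  # vm1 has 724 MB idle, so all 500 MB come from it
```

If the pool cannot supply the full request, the shortfall is exactly what would spill into the swap file penalties the article warns about.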