
Strategies to balance memory overcommitment and mitigate risk

Memory management techniques that allow VMs to share resources are common, but you should follow these tips and strategies to reduce risk.

As CPUs continue to add more cores and grow more powerful, it is increasingly memory that holds back VM performance.

It's not that memory hasn't gotten faster; on the contrary, the speed and density of memory have greatly increased, but so has the price. Unfortunately for the administrator, the memory demands of modern operating systems and applications have increased at the same pace.

However, savvy administrators can help offset some of these costs by using memory overcommitment strategies to stretch their available resources -- provided they don't go too far and risk hurting VM performance.

The challenge of balancing memory overcommitment

VM memory use is a unique challenge because it has several levels that must be managed and accounted for. To get started, let's look at how physical memory is used. For every VM, the hypervisor allocates some, or all, of the memory the VM needs; the amount is normally dictated by whether memory reservations or shares are used. In addition to physical memory, the hypervisor will often create a matching disk swap file to fall back on if the host runs out of physical memory, much as traditional operating systems have used swap or page files. The result is a layered arrangement: the guest operating system talks to what it believes is physical memory or its own swap space, while the memory it actually touches may be real physical memory or hypervisor swap, depending on what the hypervisor is doing behind the scenes.
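To make the layering concrete, here is a minimal Python sketch -- using made-up host and VM numbers rather than data from any real environment -- that estimates a host's overcommitment ratio and how much configured VM memory could spill into hypervisor swap under contention.

```python
# Minimal sketch (hypothetical numbers): estimating how far a host's memory is
# overcommitted and how much VM memory could spill into hypervisor swap.

HOST_PHYSICAL_GB = 256          # installed RAM on the host (assumed)
HYPERVISOR_OVERHEAD_GB = 8      # memory reserved for the hypervisor itself (assumed)

# Configured memory per VM; reservations guarantee physical backing.
vms = {
    "prod-db":   {"configured_gb": 64,  "reservation_gb": 64},
    "prod-app":  {"configured_gb": 48,  "reservation_gb": 24},
    "test-web":  {"configured_gb": 32,  "reservation_gb": 0},
    "dev-batch": {"configured_gb": 128, "reservation_gb": 0},
}

usable_gb = HOST_PHYSICAL_GB - HYPERVISOR_OVERHEAD_GB
configured_total = sum(v["configured_gb"] for v in vms.values())
reserved_total = sum(v["reservation_gb"] for v in vms.values())

overcommit_ratio = configured_total / usable_gb
# Anything configured beyond what physical memory can back may end up in the
# hypervisor's per-VM swap file under contention.
potential_swap_gb = max(0, configured_total - usable_gb)

print(f"Configured: {configured_total} GB on {usable_gb} GB usable "
      f"(overcommit ratio {overcommit_ratio:.2f}:1)")
print(f"Guaranteed by reservations: {reserved_total} GB")
print(f"Memory at risk of hypervisor swapping under contention: {potential_swap_gb} GB")
```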

This layered approach to memory access presents the largest challenge: the administrator has to manage multiple layers of overcommitment, and those layers are not aware of each other. One of the tools the administrator does have is the virtual guest tools, which help make the guest operating system more efficient in a virtualized environment. Often, this takes the form of a memory balloon driver designed to prevent the guest operating system from consuming additional memory for caching purposes. Traditional operating systems were not designed for virtualization, so adding these tools helps bridge some of those gaps.
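The following toy Python model -- not a real driver, just an illustration of the idea -- shows how a balloon driver inflates inside a guest so the hypervisor can reclaim the physical memory behind the pages the guest gives up. The inflation cap used here is an arbitrary assumption.

```python
# Toy model (not a real driver) of how a balloon driver reclaims guest memory.
class GuestVM:
    def __init__(self, name, configured_mb):
        self.name = name
        self.configured_mb = configured_mb
        self.balloon_mb = 0          # memory pinned by the balloon driver

    def usable_by_guest_mb(self):
        # Pages claimed by the balloon are unavailable to the guest's own
        # caches, so the hypervisor can reclaim their physical backing.
        return self.configured_mb - self.balloon_mb

def inflate_balloon(vm, needed_mb, max_fraction=0.65):
    """Ask the guest to give back up to needed_mb, capped at a fraction of its
    configured memory (the cap value is an assumption for illustration)."""
    ceiling = int(vm.configured_mb * max_fraction)
    grant = max(0, min(needed_mb, ceiling - vm.balloon_mb))
    vm.balloon_mb += grant
    return grant

vm = GuestVM("test-web", configured_mb=8192)
reclaimed = inflate_balloon(vm, needed_mb=2048)
print(f"Reclaimed {reclaimed} MB; guest still sees {vm.usable_by_guest_mb()} MB usable")
```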

The transparent page sharing dilemma

Now that we have an idea of the layered effect for memory, we also need to address a change VMware made for security reasons that affects memory overcommitment: it disabled transparent page sharing by default. Transparent page sharing allows multiple VMs to share a single page in physical memory when that page is identical across those VMs. This can save a substantial amount of memory, but it raises security concerns because the VMs are no longer completely isolated. You can still use transparent page sharing today, but you must enable it manually, and there is no guarantee the feature will remain in future releases.
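Conceptually, transparent page sharing detects identical pages across VMs and backs them with a single physical copy. The short Python sketch below illustrates that idea by hashing page contents; it is an illustration only and glosses over details such as the full byte-for-byte comparison and copy-on-write handling a real hypervisor performs.

```python
# Conceptual sketch of transparent page sharing: identical pages across VMs
# are detected (here by hashing their contents) and backed by one physical copy.
import hashlib

def page_fingerprint(page_bytes):
    return hashlib.sha256(page_bytes).hexdigest()

# Hypothetical 4 KB pages from two VMs; one page is identical in both VMs
# (for example, the same guest OS code page loaded in each).
pages = {
    ("vm-a", 0): b"\x00" * 4096,
    ("vm-a", 1): b"guest-os-code-page" + b"\x00" * 4078,
    ("vm-b", 0): b"guest-os-code-page" + b"\x00" * 4078,
    ("vm-b", 1): b"\x7f" * 4096,
}

shared = {}     # fingerprint -> single backing copy
mappings = {}   # (vm, guest page number) -> fingerprint of the backing copy
for key, contents in pages.items():
    fp = page_fingerprint(contents)
    shared.setdefault(fp, contents)
    mappings[key] = fp

saved_pages = len(pages) - len(shared)
print(f"{len(pages)} guest pages backed by {len(shared)} physical copies "
      f"({saved_pages} page(s) saved)")
```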

With this setting disabled, memory overcommitment is a bit easier to track and understand. The tipping point today is when memory starts to be swapped out to disk. Fortunately, it's fairly easy to see the allocation point and to understand that disk swapping -- which can lead to a dramatic reduction in performance -- will begin once you pass it.

This article is the second in a three-part series about overcommit technologies and tips for managing resources in a virtualized data center. Read part one of this series to learn how to track and regulate CPU overcommit. Read part three for advice on how to cut waste with storage overcommitment.

As you look at memory overcommitment, you do have some flexibility, because not all VMs are created equal. Mixing production VMs with test and development VMs on the same host has real benefits for overcommitment. By using memory reservations and limits, you can force noncritical VMs onto disk paging files rather than physical memory when resources are under contention. This can be ideal, allowing you to protect your core systems while still keeping everything running -- though low-priority VMs will run at lower performance levels. One additional point to remember: forcing disk swapping, even for test VMs, increases your storage traffic, which can affect other VMs.
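Here is a rough Python sketch of that kind of policy. The VM names, numbers and the simple priority ordering are assumptions for illustration; a real hypervisor allocates memory based on shares, reservations, limits and working-set estimates rather than a straight sort.

```python
# Sketch (hypothetical policy) of how reservations and limits shape which VMs
# stay in physical memory and which fall back to hypervisor swap under pressure.
def place_memory(vms, usable_gb):
    """vms: dicts with name, configured_gb, reservation_gb, limit_gb, priority.
    Returns per-VM physical vs. swapped GB under a simplified priority policy."""
    placement = {}
    remaining = usable_gb

    # Reservations are honored first: they are guaranteed physical memory.
    for vm in vms:
        granted = min(vm["reservation_gb"], remaining)
        placement[vm["name"]] = {"physical_gb": granted}
        remaining -= granted

    # Remaining physical memory goes to higher-priority VMs first, capped by
    # each VM's limit; whatever doesn't fit is backed by the swap file.
    for vm in sorted(vms, key=lambda v: v["priority"]):
        cap = min(vm["configured_gb"], vm["limit_gb"])
        want = cap - placement[vm["name"]]["physical_gb"]
        granted = min(want, remaining)
        placement[vm["name"]]["physical_gb"] += granted
        placement[vm["name"]]["swapped_gb"] = (
            vm["configured_gb"] - placement[vm["name"]]["physical_gb"])
        remaining -= granted
    return placement

vms = [
    {"name": "prod-db",   "configured_gb": 64,  "reservation_gb": 64, "limit_gb": 64, "priority": 0},
    {"name": "prod-app",  "configured_gb": 48,  "reservation_gb": 24, "limit_gb": 48, "priority": 1},
    {"name": "dev-batch", "configured_gb": 128, "reservation_gb": 0,  "limit_gb": 32, "priority": 2},
]
for name, p in place_memory(vms, usable_gb=160).items():
    print(f"{name}: {p['physical_gb']} GB physical, {p['swapped_gb']} GB to swap")
```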

Another option, for administrators who can add local disks, is to use solid-state drives to hold a VM's paging files. This lets you overcommit memory without some of the same performance concerns. Unfortunately, with the swap file located on the host's own hardware, you would no longer be able to use migration or vMotion technology.

Next Steps

How SSD can support memory overcommitment

What's the difference between memory sharing and overcommit?

VM memory management techniques you should remember
