When a computer runs short of physical memory, page swapping allows the system's local disk drive to serve as supplemental memory by swapping memory pages between the disk and physical memory as needed. This is a tried-and-true approach that can prevent system crashes. But disk access is at least an order of magnitude slower than memory access, so page swapping can have a significant performance penalty for virtual machines.
By comparison, memory caching stores frequently used content in a relatively small portion of memory. As long as the content is found in the cache -- a cache hit -- the access takes place at memory speeds. If not -- a cache miss -- the content must be loaded from disk.
Memory compression is a variation on caching that is designed to accommodate memory overcommitment without the additional time needed for disk access. Rather than simply sending an idle memory page to a disk swap file, the page is first compressed and then stored in a small area of the VM's memory set aside as a memory compression cache. This frees memory and allows greater levels of memory overcommitment. When compressed pages are needed later, it's far faster to retrieve them from the cache, decompress them and restore them to working memory than it would be to retrieve them uncompressed from a swap file.
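To make the mechanism concrete, here is a minimal sketch of a compress-on-evict page cache in Python. The class name, capacity accounting and eviction policy are illustrative assumptions, not how any particular hypervisor implements its cache; the point is simply that an idle page is compressed into a bounded memory area and decompressed on demand rather than being written to disk.

```python
import zlib

PAGE_SIZE = 4096  # a typical 4 KB memory page


class CompressionCache:
    """Toy model of a per-VM memory compression cache (illustrative only)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.store = {}  # page number -> compressed bytes

    def evict_page(self, page_no, page_bytes):
        """Compress an idle page; keep it only if it fits in the cache."""
        compressed = zlib.compress(page_bytes)
        if self.used + len(compressed) > self.capacity:
            return False  # cache full; a real system would fall back to disk swap
        self.store[page_no] = compressed
        self.used += len(compressed)
        return True

    def restore_page(self, page_no):
        """Decompress a cached page and return it to working memory."""
        compressed = self.store.pop(page_no)
        self.used -= len(compressed)
        return zlib.decompress(compressed)


# Example: an idle (zero-filled) page compresses well, so the cache
# holds far less than PAGE_SIZE bytes per page it stores.
cache = CompressionCache(capacity_bytes=100 * 1024 * 1024)  # 100 MB cache
page = bytes(PAGE_SIZE)
cache.evict_page(7, page)
restored = cache.restore_page(7)
assert restored == page
```

Decompressing from this cache costs CPU time, but it avoids the disk round trip entirely, which is the trade-off the article describes.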
Hypervisors like VMware's ESXi allow administrators to enable or disable the memory compression cache and to set the compression cache size for each VM. By default, ESXi enables memory compression with the cache sized at 10% of the VM's memory, but administrators can change this setting to anywhere from 5% to 100%.
Remember, the memory set aside for the cache is carved out of each VM's memory allocation. For example, if 1 GB is provisioned to a VM with a 10% cache setting, roughly 100 MB of the VM's memory will be used for the cache. The idea is that the 100 MB cache may hold 200 MB or more worth of compressed idle content, freeing that memory from the rest of the VM's allocation for other uses. The cost of the memory compression cache -- measured in the time needed to decompress data -- should be more than recovered in the amount of idle memory freed without disk swap penalties.
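The sizing arithmetic above can be sketched in a few lines. The 2:1 compression ratio used here is an assumption for illustration; actual savings depend on how compressible the idle pages are.

```python
vm_memory_mb = 1024   # VM provisioned with 1 GB (1,024 MB)
cache_pct = 10        # default compression cache size: 10% of VM memory

cache_mb = vm_memory_mb * cache_pct / 100

# Assumption for illustration: idle pages compress at roughly 2:1,
# so the cache can hold about twice its own size in idle content.
compression_ratio = 2
idle_content_mb = cache_mb * compression_ratio

print(f"Compression cache: {cache_mb:.1f} MB")
print(f"Idle content it can hold: {idle_content_mb:.1f} MB")
```

With these inputs the cache consumes about 102 MB of the VM's 1 GB but can absorb roughly 205 MB of idle pages that would otherwise occupy uncompressed memory or spill to the swap file.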