When a computer runs short of physical memory, page swapping allows the system's local disk drive to serve as supplemental memory by swapping memory pages between the disk and physical memory as needed. This is a tried-and-true approach that can prevent system crashes. But disk access is at least an order of magnitude slower than memory access, so page swapping can impose a significant performance penalty on virtual machines.
By comparison, memory caching stores frequently used content in a relatively small portion of memory. When the needed content is already in the cache -- a cache hit -- the access takes place at memory speeds. If not -- a cache miss -- the content must be loaded from disk.
Memory compression is a variation on caching designed to accommodate memory overcommitment without the additional time needed for disk access. Rather than simply sending an idle memory page to a disk swap file, the hypervisor first compresses the page and then stores it in a small area of the VM's memory set aside as a memory compression cache. This frees memory and allows greater levels of memory overcommitment. When compressed pages are needed later, it is much faster to retrieve them from the cache, decompress them and restore them to working memory than it would be to retrieve them uncompressed from a swap file.
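The eviction-and-restore cycle described above can be sketched in a few lines of Python. This is a toy model, not hypervisor code: the `CompressionCache` class, page numbers and 4 KB page size are illustrative assumptions, and `zlib` stands in for whatever compression algorithm a real hypervisor uses.

```python
import zlib

PAGE_SIZE = 4096  # assume a typical 4 KB memory page


class CompressionCache:
    """Toy model of a per-VM memory compression cache."""

    def __init__(self):
        self.cache = {}  # page number -> compressed bytes

    def evict(self, page_no, page_bytes):
        # Compress the idle page and keep it in memory
        # instead of writing it to a disk swap file.
        self.cache[page_no] = zlib.compress(page_bytes)

    def restore(self, page_no):
        # Cache hit: decompress at memory speed, with no disk I/O.
        return zlib.decompress(self.cache.pop(page_no))


# An idle page with repetitive content compresses well.
page = b"idle" * (PAGE_SIZE // 4)
cc = CompressionCache()
cc.evict(7, page)
print(len(cc.cache[7]) < PAGE_SIZE)  # compressed copy is far smaller than the page
assert cc.restore(7) == page         # contents survive the round trip
```

The point of the sketch is the trade-off: a small CPU cost to compress and decompress in exchange for avoiding a disk round trip entirely.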
Hypervisors such as VMware ESXi allow administrators to enable or disable the memory compression cache and to set the compression cache size for each VM. By default, VMware enables memory compression with a cache size of 10% of VM memory, but administrators can change this setting to anywhere from 5% to 100%.
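On an ESXi host, these controls are exposed as advanced system settings. The commands below are a hedged sketch: they assume the advanced options `Mem.MemZipEnable` and `Mem.MemZipMaxPct` found in recent ESXi releases, so verify the option names and valid ranges on your own host before applying them.

```shell
# Inspect the current memory compression settings (option names assumed)
esxcli system settings advanced list -o /Mem/MemZipEnable
esxcli system settings advanced list -o /Mem/MemZipMaxPct

# Raise the compression cache ceiling from the 10% default to 25% of VM memory
esxcli system settings advanced set -o /Mem/MemZipMaxPct -i 25

# Disable the memory compression cache entirely
esxcli system settings advanced set -o /Mem/MemZipEnable -i 0
```

The same settings can typically be changed through the vSphere Client's advanced system settings for the host.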
Remember, the memory set aside for the cache is carved out of each VM's memory allocation. For example, if 1 GB is provisioned to a VM that is set with a 10% cache, about 100 MB of the VM's memory will be used for the cache. The idea is that the 100 MB cache may hold 200 MB or more worth of compressed idle content, freeing the equivalent memory in the rest of the VM for other uses. The cost -- measured in the time needed to decompress data -- of the memory compression cache should be more than recovered in the amount of idle memory freed without disk swap penalties.
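The sizing arithmetic works out as follows. The 2:1 figure below reflects ESXi's behavior of caching only pages that compress to at least half their original size; treat it as a floor, since many idle pages compress better than that.

```python
vm_memory_mb = 1024      # 1 GB provisioned to the VM
cache_fraction = 0.10    # 10% compression cache (the ESXi default)

cache_mb = vm_memory_mb * cache_fraction

# Pages are cached only if they compress to half size or better,
# so the cache holds at least twice its own size in idle content.
min_compression_ratio = 2
idle_mb_held = cache_mb * min_compression_ratio

print(cache_mb, idle_mb_held)  # 102.4 204.8
```

So a roughly 100 MB cache carved from a 1 GB VM can hold at least 200 MB of idle content, a net gain of about 100 MB of usable memory with no disk swap involved.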
Related Q&A from Stephen J. Bigelow