In today’s data centers, organizations are packing more VMs onto fewer physical servers. That’s great for server consolidation, but without proper memory allocation it can lead to oversubscription of CPU cycles and other computing resources. When considering consolidation, know the symptoms of resource oversubscription and how to limit or even prevent it.
Desktop and server consolidation raises the concern that concentrating workloads brings an inevitable risk of downtime and the potential to oversaturate the four core computing resources: memory, processor, network and disk.
In an ideal world, a hypervisor should consume these resources as fully as possible while leaving headroom to accommodate growth in the virtual machines (VMs) as well as any unexpected surges in workload. Expressed simply, a hypervisor consuming only 1% of memory, CPU, network or disk is underutilized. Likewise, a hypervisor running at 99% of memory, processor, network or disk is likely to provide poor performance and be a major bottleneck in any clustered environment.
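The balance described above can be sketched as a simple classifier. This is illustrative only: the 20% and 80% thresholds are assumptions chosen for the example, not vendor guidance, and real capacity planning would use your monitoring tools' data.

```python
def classify_utilization(pct: float, low: float = 20.0, high: float = 80.0) -> str:
    """Classify a single hypervisor resource (memory, CPU, network or disk)
    by its utilization percentage. Thresholds are illustrative assumptions."""
    if pct < low:
        return "underutilized"       # wasted capacity, e.g. the 1% case
    if pct > high:
        return "likely bottleneck"   # no headroom left, e.g. the 99% case
    return "healthy"                 # utilized with room for surges
```

For example, `classify_utilization(1.0)` returns `"underutilized"` and `classify_utilization(99.0)` returns `"likely bottleneck"`, matching the two extremes in the text.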
Consolidation and oversubscription: Striking a happy balance
It’s the administrator’s job to strike a happy balance between these extremes. It’s as much an art as a science, and successful administrators know their environments and work closely with application owners to resolve any performance problems.
VMs themselves rarely cause performance problems; badly configured applications are usually the culprits. The virtualization layer always takes the heat for this because application owners will say that the service worked brilliantly on a physical server. When they say this, they are subconsciously betraying the fact that they are skeptical of virtualization to some degree.
Unless the VM was created by a physical-to-virtual (P2V) process, it is likely that the way the application was configured does not mirror the physical system. That said, most customers paradoxically experience improvements in performance because virtualization projects often bring in new and improved hardware at the server and storage layers.
Memory is king
The biggest single constraint in virtualization is proper memory allocation. Environments run out of memory before they run out of CPU cycles or bandwidth to the network or storage array.
To avoid this, begin by “right-sizing” your VMs relative to the demands of your application. This means resisting the demands of application owners who request VMs with the same specification as the physical server.
A dose of reality is needed here. It’s totally unrealistic to think a Tier 1 application such as Microsoft Exchange, SQL or Oracle will sit happily with just the memory allocation needed to run a 64-bit operating system. In general, most environments are risk-averse, and administrators have a tendency to over-spec VMs in hopes that they will not experience any blowback from disgruntled application owners.
This approach to memory allocation should be avoided at all costs. Over-specified VMs waste resources that could have been allocated to more deserving VMs, and the practice systematically and unnecessarily degrades the performance of features elsewhere in the environment.
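One way to right-size is to base the allocation on observed peak guest usage plus a safety margin, rather than on what the application owner requests. The sketch below assumes a 25% headroom figure and a 256 MB rounding granule; both are illustrative defaults for the example, not VMware recommendations.

```python
def right_size_memory_mb(peak_usage_mb: int, headroom_pct: float = 25.0) -> int:
    """Suggest a VM memory allocation from observed peak guest usage
    plus headroom, rounded up to the nearest 256 MB.
    The 25% headroom is an illustrative assumption."""
    target = peak_usage_mb * (1 + headroom_pct / 100.0)
    granule = 256
    # Ceiling division, then scale back up to a whole number of granules.
    return int(-(-target // granule) * granule)
```

A guest that peaks at 3,000 MB would get roughly 3,840 MB under these assumptions, rather than the 16 GB or 32 GB the original physical server may have had.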
Memory allocation issues: Wasted resources
Another area to review is any system that has been converted to a VM through the process of P2V. More often than not, IT folks choose not to downgrade memory allocation, leaving the VMs with the same allocation as the original physical machine. This can contribute to a massive waste in resources.
Remember why you virtualized in the first place? You had many physical systems that were using only 10% to 20% of their resources, taking up space and power in the data center. If you think you are experiencing memory problems, check the following areas:
- Does the VM have the correct amount of memory allocation?
- Has the physical server run out of memory?
- Is there swap activity taking place inside the guest operating system and at the hypervisor level?
- Are there unusually high statistics in the hypervisor’s memory management systems?
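The checklist above can be expressed as a small diagnostic routine. Everything here is a hedged sketch: the counter names and thresholds are hypothetical, and in practice the values would come from your monitoring tools (guest OS counters, the hypervisor’s performance statistics, or a utility such as esxtop).

```python
def diagnose_memory(stats: dict) -> list[str]:
    """Flag the memory problem areas from the checklist.
    Keys and thresholds are illustrative assumptions, not real counter names."""
    issues = []
    # 1. Does the VM have the correct memory allocation?
    if stats.get("vm_allocated_mb", 0) > 2 * max(stats.get("vm_peak_used_mb", 0), 1):
        issues.append("VM allocation far exceeds observed peak usage")
    # 2. Has the physical server run out of memory?
    if stats.get("host_free_mb", 0) < 1024:
        issues.append("physical server is nearly out of memory")
    # 3. Swap activity in the guest OS or at the hypervisor level?
    if stats.get("guest_swap_kbps", 0) > 0 or stats.get("hypervisor_swap_kbps", 0) > 0:
        issues.append("swap activity detected in guest or hypervisor")
    # 4. Unusually high activity in the hypervisor's memory management?
    if stats.get("balloon_mb", 0) > 0:
        issues.append("hypervisor memory reclamation (ballooning) is active")
    return issues
```

A healthy host returns an empty list; any non-empty result points back to the corresponding item on the checklist.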
Mike Laverick is a professional instructor with 17 years’ experience in technologies such as Novell, Windows and Citrix. Involved with the VMware community since 2003, Laverick is a VMware forum moderator and a member of the London VMware User Group Steering Committee. He is also the owner and author of the virtualization website and blog RTFM Education, where he publishes free guides and utilities aimed at VMware ESX/VirtualCenter users.