The number of virtual machines (VMs) that you can fit on each physical server -- or VM density -- is one of the most important metrics for gauging the success of your server consolidation and virtualization efforts. It can also improve business outcomes by indicating where your organization can cut costs and improve operating margins.
In this tip, I'll highlight new Enterprise Management Associates (EMA) research that shows how successful the best, average and worst performers are in terms of VM density -- and the results may surprise you.
Effective VM density ratios
In interviews I have conducted for EMA, it's become clear that some enterprises don't measure VM density at all. Many organizations follow a rule-of-thumb or best-guess practice for placing VMs on physical servers, based on anecdotes and estimates -- for example, placing a maximum of four VMs on every server regardless of workload or resource utilization. These approaches are far from ideal: resources are wasted and, under particularly heavy workloads, performance can suffer as well.
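The difference between the two approaches can be sketched in a few lines. This is a minimal, hypothetical illustration -- the limit, headroom threshold and utilization figures are invented, and real placement decisions would weigh memory, storage and I/O as well:

```python
# Hypothetical sketch: contrast a fixed "four VMs per host" rule with a
# simple utilization-based check. All figures are invented for illustration.

FIXED_VM_LIMIT = 4    # rule-of-thumb cap, regardless of workload
CPU_HEADROOM = 0.80   # keep projected CPU utilization under 80%

def fixed_rule_allows(host_vm_count):
    """Rule of thumb: admit a VM only if the host has fewer than 4 VMs."""
    return host_vm_count < FIXED_VM_LIMIT

def utilization_allows(host_cpu_used, vm_cpu_demand):
    """Utilization-based: admit a VM only if projected CPU stays in headroom."""
    return host_cpu_used + vm_cpu_demand <= CPU_HEADROOM

# A host already running five light VMs at 10% CPU each:
print(fixed_rule_allows(5))            # False: the fixed rule wastes capacity
print(utilization_allows(0.50, 0.10))  # True: real headroom remains
```

The fixed rule turns the sixth light workload away even though the host is half idle, which is exactly the waste described above.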
EMA's research shows that the average enterprise actually runs six virtual machines per physical server: a healthy, if not exceptional, consolidation rate. But the best performers substantially exceeded the average by running about 15 VMs on each physical server. The worst performers, by contrast, loaded only two (or fewer) VMs per server.
Of course, this metric varies from one environment to another because it's dependent on several factors, such as the size of physical servers and the types of workloads they run. Nevertheless, taken as an average, it provides useful insight into the status of an organization's virtual deployment and where to take it in the future. This approach is far better than depending on questionable industry estimates, an inefficient rule-of-thumb or on internal water-cooler experts.
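Computing the metric itself is straightforward. The sketch below uses an invented inventory (host names and VM counts are hypothetical) to show per-host density alongside the fleet-wide average that the survey figures describe:

```python
# Hypothetical sketch: per-host and fleet-average VM density from an
# inventory list. Host names and counts are invented for illustration.

inventory = {
    "esx-01": 15,   # comparable to a best-performer host
    "esx-02": 6,    # around the survey average
    "esx-03": 2,    # comparable to a worst-performer host
}

def average_density(vm_counts):
    """Fleet-wide VM density: total VMs divided by number of physical hosts."""
    return sum(vm_counts.values()) / len(vm_counts)

for host, vms in inventory.items():
    print(f"{host}: {vms} VMs")
print(f"fleet average: {average_density(inventory):.1f} VMs/host")
```

Tracking this average over time, rather than guessing at it, is what makes the comparisons against industry benchmarks meaningful.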
Achieving more efficient VM density ratios
So how do you improve VM density ratios? Better virtual systems management (VSM) is certainly part of the answer, especially for larger deployments. In the course of EMA's research, we asked respondents about 18 different VSM tools and disciplines and which, if any, they used. The results consistently differentiated the best performers from the worst. The best performers had the following characteristics:
- They were 39% more likely to use virtual machine management software. Technologies such as VMware vCenter (with VMotion), Citrix XenServer (with XenMotion) and Microsoft System Center's Virtual Machine Manager make it easier to move virtual workloads among servers to optimize resource utilization. This means you can stack more workloads on fewer physical servers and still rapidly respond to unexpected peaks and troughs with minimal impact on active workloads and performance.
- They were 28% more likely to use workload automation software. Next-generation workload automation technologies such as IBM Tivoli Workload Scheduler, Tidal Enterprise Scheduler and CA AutoSys understand how and where virtual systems are provisioned, so they can run batch workloads on underused servers or even re-provision unused servers as required. With tools like these, you need fewer dedicated servers because you can more effectively share server capacity between online and batch workloads.
- They were 25% more likely to use performance and availability monitoring software. Performance management technologies such as Vizioncore vFoglight, Hyperic HQ Enterprise and BMC Software Performance Management can measure resource utilization across physical and virtual systems, so you can determine how much of each physical server your virtual workloads use. This enables you to allocate more virtual workloads onto each server while ensuring that performance does not suffer.
- They were 12% more likely to use capacity planning software. Tools such as Novell PlateSpin Recon, Lanamark Suite 2009 and ToutVirtual Virtual IQ allow you to determine historical and real-time server utilization. Perhaps more important, they also provide predictive analysis. These capabilities enable you to load the maximum number of VMs per server with the peace of mind of knowing you will have the headroom needed to accommodate expected workload growth, both immediately and over time.
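The predictive side of capacity planning can be illustrated with a simple projection. This is a minimal sketch under invented assumptions -- six months of made-up CPU samples, an assumed 80% ceiling and a plain least-squares trend line; commercial tools use far more sophisticated models:

```python
# Hypothetical sketch of predictive capacity planning: fit a linear trend
# to monthly CPU-utilization samples and estimate how many months remain
# before a host exceeds its headroom ceiling. All figures are invented.

CEILING = 0.80  # treat 80% sustained CPU as the practical limit

def linear_trend(samples):
    """Least-squares slope and intercept for evenly spaced samples."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

def months_until_ceiling(samples, ceiling=CEILING):
    """Project the trend forward to the month the ceiling is crossed."""
    slope, intercept = linear_trend(samples)
    if slope <= 0:
        return None  # utilization flat or falling: no projected exhaustion
    return (ceiling - intercept) / slope

# Six months of rising utilization on one host:
history = [0.40, 0.44, 0.47, 0.52, 0.55, 0.59]
print(f"months until {CEILING:.0%} ceiling: {months_until_ceiling(history):.1f}")
```

A projection like this is what lets you load a server close to its maximum today while staying confident about the headroom for expected growth.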
EMA's research highlights other VSM tools and disciplines that can contribute to higher VM density, but those I've covered here are among the most important. Unfortunately, you won't go from being one of the worst performers to one of the best overnight just by implementing one or more of these technologies. But the correlations are convincing, and the reasoning behind how these solutions deliver better per-server VM metrics is sound.
So if you find that your organization fails to meet its target, is among the worst performers or is below the industry average, consider improving your approach in one or more of these areas. Doing so just might help you achieve your virtualization objectives.
Increasing VM Density
- How many VMs you run per server is a key measure of virtualization success.
- The average is six VMs per server, but the best performers are at about 15 VMs per server.
- Key virtual systems management disciplines that help increase VM density include virtual machine management, workload automation, performance and availability monitoring, and capacity planning.
Andi Mann is a research director with the IT analyst firm Enterprise Management Associates (EMA). Mann has over 20 years of IT experience in both technical and management roles, working with enterprise systems and software on mainframes, midrange systems, servers and desktops. Mann leads the EMA Systems Management research practice, with a personal focus on data center automation and virtualization. For more information, visit EMA's website.