Defining an ideal virtual server consolidation ratio

Finding the best virtual server consolidation ratio is difficult, and larger, virtualization-friendly servers do not necessarily ease the process.


The ideal virtual server consolidation ratio can be elusive. Larger servers and higher core counts are tempting, but licensing models and uptime concerns are keeping IT managers from overconsolidating.

When it comes to designing a virtual server consolidation strategy, how much is too much? How much is too little? If you’re using virtualization, that can be a surprisingly difficult question to answer.

In the early days of virtualization, the goal for a server consolidation ratio was usually “the more the merrier.” Stuff as many virtual machines (VMs) onto a server as it can possibly hold, reasoned IT managers, to get maximum bang for your hypervisor software buck.

But that was then, when virtualization was relegated to handling low-transaction, lightweight workloads. These days, virtual servers host a growing array of mission-critical applications that can’t go down, and certainly not for avoidable reasons like poor capacity planning. To a large extent, that has put the brakes on over-the-top virtual server consolidation ratios, as punch-drunk IT managers come back to earth and accept the value of proper resource allocation, uptime and capacity planning.

Dreams of maxed-out consolidation ratios also predate July 2011, when VMware Inc., a leading virtualization provider, introduced a new pricing model that encourages IT managers to keep an eye on resource consumption. Whereas VMware used to sell its vSphere suite on a per-processor basis, with no regard to how many VMs ran on a host, the vSphere 5 suite includes a “vRAM” entitlement that caps the total memory that can be configured to powered-on virtual machines per license. Because VM density is bound largely by physical memory, the new licensing model limits the number of VMs that can be cost-effectively run on a server.
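To see how the vRAM model changes the math, consider a rough license calculator. This is a sketch, not VMware’s official formula: licenses are sold per CPU socket, each carrying a pooled vRAM entitlement, and the 96 GB figure below is illustrative only (entitlements varied by vSphere edition and were revised after launch).

```python
import math

def vsphere5_licenses(cpu_sockets, total_vram_gb, vram_per_license_gb=96):
    """Rough vSphere 5-style license count: at least one license per
    CPU socket, plus extra licenses if the VMs' pooled vRAM exceeds
    the per-license entitlement. The 96 GB entitlement is illustrative."""
    by_socket = cpu_sockets
    by_vram = math.ceil(total_vram_gb / vram_per_license_gb)
    return max(by_socket, by_vram)

# A dual-socket host running 60 VMs at 4 GB each needs 240 GB of vRAM:
# ceil(240 / 96) = 3 licenses, one more than pure per-socket pricing.
print(vsphere5_licenses(cpu_sockets=2, total_vram_gb=240))  # 3
```

In other words, past a certain VM density, adding memory to a host starts adding license cost, which is exactly the brake on consolidation the model was designed to apply.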

Thus far, VMware is the only virtualization vendor to adopt a resource-based pricing model, and other virtualization vendors cite their commitment to strictly CPU-based pricing as a competitive advantage. But the writing is on the wall: As workloads move to an increasingly virtualized, cloud-based model, expect vendors to charge for their wares according to underlying resource consumption.

Meanwhile, infrastructure vendors continue to introduce ever larger, more virtualization-friendly servers that make it easy to stuff dozens and dozens of virtual machines onto a single host and diminish the need for optimized VM sizing and placement. But high virtual server consolidation ratios come at a price, not just in terms of hardware and licenses, but also in terms of uptime. The failure of a highly consolidated, improperly configured server can have dramatic consequences for application availability and uptime.

Virtual server consolidation ratios vary widely

With that as the backdrop, what kinds of server consolidation ratios are IT administrators working with these days? The answer is, not surprisingly, it depends.

Back in the days of VMware ESX 3.x, a good rule of thumb for virtual server consolidation was four VMs per core, said Joe Sanchez, an IT manager at hosting provider Go Daddy. Given a dual-processor, quad-core server, for example, that worked out to roughly 32 VMs per host, or a 32:1 consolidation ratio.


These days, most hypervisors can theoretically support higher numbers of VMs per core, but even so, four VMs of one or two virtual CPUs (vCPUs) per core is still a good guide if balanced performance is the goal, Sanchez said.

“The new servers and ESX versions can handle more VMs,” he said, “but the CPU wait time is still affected and can cause performance issues with too many VMs waiting on the same core.”

And if performance isn’t a concern, what about test and development environments? “Then load the cores up until the cows come home,” Sanchez said.
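Sanchez’s rule of thumb is easy to turn into a first-pass sizing estimate. The sketch below is just that, a CPU-based ceiling: in practice, memory and licensing constraints usually bind well before the core math does.

```python
def max_vms_per_host(sockets, cores_per_socket, vms_per_core=4):
    """CPU-based ceiling from the four-small-VMs-per-core rule of thumb.
    Assumes VMs of one or two vCPUs; treat the result as an upper
    bound, since memory usually becomes the limit first."""
    return sockets * cores_per_socket * vms_per_core

print(max_vms_per_host(2, 4))    # dual-socket quad-core: 32 VMs
print(max_vms_per_host(4, 10))   # quad-socket, 10-core Xeon E7: 160 VMs
```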

Walz Group, a provider of regulated document management services, takes exactly that approach in its virtual server consolidation strategy: very conservative VM-to-host ratios for production systems, and much higher ratios for test, development and quality assurance environments.

“On production systems, we hardly ever run more than 15 VMs per host,” said Bart Falzarano, chief information security officer at the Temecula, Calif., firm, which runs VMware on dual-processor, four-core Cisco UCS B-Series blades with NetApp storage in a certified FlexPod configuration.

Outside production, however, there are no such restrictions, with virtual server consolidation ratios often reaching 40:1, said Falzarano. He said he knew of environments at other organizations that drove VM densities much higher—in the neighborhood of 100:1.

Thank Intel, AMD for increased server core counts

To a large extent, today’s increased VM densities are nothing to crow about — they’re largely the result of increased server core counts and not any magic on the part of virtualization providers or practitioners.

Indeed, after reviewing customer usage data over time, virtualization management vendor VKernel found that virtualization shops’ increased VM densities track very closely to increases in server core counts.

“I realized that the great consolidation ratios from virtualization you are seeing in your data center have little to do with more efficient use of CPU and memory,” Bryan Semple, VKernel’s chief marketing officer, said in a blog post. “Rather, the ratios have almost everything to do with Intel’s ability to increase core density per host.”

Indeed, current Intel Xeon E7 (“Westmere-EX”) processors feature up to 10 cores, and the recently released AMD Opteron 6200 (“Interlagos”) has up to 16. With that kind of horsepower under the hood, it’s possible to approach 100:1 virtual server consolidation ratios on a scale-up server without breaking a sweat. Or, as VKernel’s Semple put it: “Please send Paul Otellini, CEO of Intel, a thank you note.”

The concepts of clusters and resource pools have further diminished the focus on individual servers and their configurations. “We don’t think so much in terms of servers, but in terms of an overall resource pool,” said Adrian Jane, infrastructure and operations manager at the University of Plymouth in the UK. The university sizes each server to be able to host its largest VM, currently an eight-vCPU machine with 24 GB of memory running Microsoft Exchange, and lets VMware Distributed Resource Scheduler handle the rest.
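That sizing rule, that every host in the pool must be able to run the largest single VM, reduces to a simple check. The hypervisor-overhead figure below is an assumption for illustration, not a number from the university.

```python
def host_fits_largest_vm(host_cores, host_ram_gb,
                         vm_vcpus=8, vm_ram_gb=24, overhead_gb=4):
    """True if a host can run the pool's largest VM: at least as many
    physical cores as the VM has vCPUs, and enough RAM for the VM
    plus hypervisor overhead (the 4 GB overhead is an assumption)."""
    return host_cores >= vm_vcpus and host_ram_gb >= vm_ram_gb + overhead_gb

print(host_fits_largest_vm(host_cores=12, host_ram_gb=96))  # True
```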

Knobs and dials help optimize virtual server consolidation ratios

As virtualization becomes more and more mainstream — and as the lingering recession chips away at IT budgets — the urge to optimize server consolidation ratios is increasing, said Alex Rosemblat, VKernel product marketing manager.

If a new server purchase isn’t on the horizon, there are still plenty of things that an IT manager can do to improve server consolidation ratios, he said.

For example, he said, an overwhelming number of VMs are overprovisioned with memory and virtual CPUs from the get-go — not because of administrator error, but because application owners often insist on more resources than they require.

Another common misconfiguration involves virtual machine memory limits, which are sometimes set and then forgotten. That becomes a problem when administrators trying to fix a performance problem assign the VM more memory, not realizing that the limit prevents it from actually using the extra capacity.
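Forgotten limits are easy to audit programmatically. Here is a minimal sketch using the open source pyVmomi SDK that flags VMs whose memory limit is set below their provisioned memory; the vCenter host name and credentials are placeholders to adapt for your environment.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; substitute your own vCenter and creds.
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:
            continue
        limit = vm.config.memoryAllocation.limit   # -1 means unlimited
        provisioned = vm.config.hardware.memoryMB
        if limit not in (None, -1) and limit < provisioned:
            print(f"{vm.name}: {limit} MB limit on a {provisioned} MB VM")
    view.Destroy()
finally:
    Disconnect(si)
```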

When IT shops first began adopting virtualization, the return on investment was so dramatic that few thought to question whether they could eke more savings out of the environment, Rosemblat said. Take a 100-host environment that is consolidated down to 20 hosts, at a rate of 5:1. “Even with that relatively low density, people were happy with a really great return on investment,” he said.

Fast-forward a couple of years. “People have gotten used to running with only 20 hosts, and costs are creeping up,” he said. Making matters worse is the ease with which new servers are deployed with virtualization — resulting in so-called virtual sprawl. Budgets, meanwhile, are flat or down, and IT managers are actively looking for ways to cut costs, and augmenting their server consolidation strategy is an easy way to go about it.

Real-world requirements keep virtual server consolidation ratios in check

IT managers cite very real concerns about uptime that prevent them from driving deeper server consolidation ratios.

For example, the University of Plymouth runs its virtual environment in an active-active configuration between its primary and leased data centers a few miles apart, said Jane. It strives to run at no greater than 45% utilization, so that if one site were to go down, the other site could take over its entire load with a bit of room to spare.
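That 45% ceiling falls directly out of the failover arithmetic: with two active-active sites, the surviving site must absorb the other’s full load, so each can safely run at just under half capacity. The 10% safety margin below is an assumption chosen to reproduce the university’s figure.

```python
def max_site_utilization(sites=2, safety_margin=0.10):
    """Ceiling per site so that the survivors of a single-site failure
    can absorb the lost site's load with some room to spare."""
    return (sites - 1) / sites * (1 - safety_margin)

print(max_site_utilization())  # 0.45, Plymouth's 45% target
```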

In addition, the organization leases its equipment, which it replaces wholesale on a four-year cycle. That means that when it comes time to purchase new servers, they must be sized to handle a total site failure as well as four years of growth.

The university went through that server-sizing exercise just last year. The team determined it would need a pool of 180 cores to support its workloads, and ended up purchasing 384 cores, distributed across 32 two-processor, six-core IBM BladeCenter HS22 blades.
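As a back-of-the-envelope check, the purchase squares with the utilization target described above, assuming the 180-core requirement represents steady-state load across the whole pool:

```python
blades, sockets, cores_per_socket = 32, 2, 6
purchased = blades * sockets * cores_per_socket  # 384 cores
needed = 180                                     # cores required today
print(purchased, needed / purchased)             # 384 cores, ~47% utilized
```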

That may seem like overkill, but Jane hopes that overbuying up front will avert a resource shortfall at the next refresh. At the last one, the university was running at 55% capacity, which meant it couldn’t upgrade its systems simply by failing over to the secondary site. “We had to choose which VMs to take down, and it was very painful,” he said.

Will cloud come to the rescue?

As with other intractable IT problems, cloud computing is being pitched as a possible solution to the problem of how to balance virtual server consolidation ratios.

The University of Plymouth’s servers won’t be coming off lease for another three years, and perhaps by that point, Jane said, cloud computing will have matured to the point where overprovisioning servers is no longer necessary.

Rather than buying extra capacity, “What I would like is a cloud-based resource topper,” Jane said, to which the university could burst in short- or long-term fashion when extra capacity was needed.

That ecosystem isn’t available yet, Jane said, “but in the next couple of years, I expect the cloud to be mature enough to handle a lot of our services.”

This was first published in May 2012
