If you’ve already worked with server virtualization, chances are that you understand the importance of a server consolidation strategy. It’s probably the single most important consideration in a virtual data center.
Simply stated, server consolidation increases the use of available computing resources and allows more virtual machines (VMs) to operate simultaneously on a physical host system. But there are practical limitations to a server consolidation strategy—even with today’s most powerful and virtualization-friendly servers. Too much server consolidation is not a good thing, and administrators need to consider the serious implications of excess consolidation in their data center.
Server consolidation has become so ubiquitous that it's easy to forget why we need a consolidation strategy in the first place: it saves money.
Consider traditional nonvirtualized environments where one application operates on one server. The server rarely uses more than 10% of its total computing resources for the application, and each new service or application brought yet another underutilized physical server into the data center.
Virtualization encapsulates workloads and lets multiple workloads reside together on the same physical server, allowing administrators to use considerably more of the server’s CPU, memory and I/O resources. A solid server consolidation strategy translates into fewer physical servers with fewer power and cooling requirements.
In addition, workloads can be moved between physical servers using live migration, allowing real-time workload balancing and minimizing application downtime for hardware maintenance and repairs. Even the licensing schemes for Windows Server 2008/R2 Data Center Edition make hosting VMs on the same server more cost-effective than ever.
Taken all together, a server consolidation strategy can improve computing efficiency and present a significant cost savings for the modern enterprise.
Too much server consolidation
As with so many other things, too much server consolidation is not beneficial for data centers or their users. But over-consolidation is becoming a more familiar phenomenon as organizations stretch their resources to the limit in a server consolidation strategy. The problem is that virtualization is too easy.
In years past, the addition of a new application or service meant a capital expenditure for the server and the labor to install it. The group or department requesting the expenditure faced financial scrutiny and had to budget for their decisions, and it might have taken weeks—or even months—to implement the deployment.
Virtualization completely changes the paradigm. Today’s companies can provision a new VM on an existing server in a matter of minutes. There’s no new server hardware to buy or install, and the only immediate costs involve the operating system and application licenses.
The desire to make IT faster and more responsive has fostered an “on-demand” climate where computing resources are perceived as a free commodity that is easily depleted with little—if any—regard for the business implications.
Some organizations overburden their servers as a matter of standard practice, with the goal of using 100% of the server’s resources.
"If I'm paying for a four-socket box, and I've already paid for my [Windows Server] Data Center licenses, how many servers can I get on that box?" said Todd Erickson, president of Technology Navigator, a firm that provides business intelligence to the financial industry.
Other organizations bloat their servers by accident, fitting new VMs onto any server with enough available resources to make it work, but without any regard for the business implications of this server consolidation strategy. In reality, both approaches expose the organization to the serious consequences of server over-consolidation.
Application performance and stability are the first casualties of an over-consolidated server as VMs compete for scarce computing resources. Every application on the server can be affected to one extent or another, including backup, disaster recovery and other data protection tools.
Less extreme cases may only cause hesitation in the application, while more extreme situations may crash one or more VMs—or even crash the entire server. It’s a risk that most administrators choose to avoid because of the corresponding impact on business revenue, customer access and satisfaction, and the potential for data loss with its compliance implications.
With a high server consolidation ratio, a single host failure can take down a large number of VMs at once. And once a failure occurs, all of those affected VMs have to restart, either on the original server or across one or more other servers in the data center. The VM recovery process can place significant stress on the entire virtual environment.
Over-consolidation also impairs live migration capabilities within the environment. Even though most administrators don't allow automatic migration, the ability to move workloads on demand is an essential benefit of virtualization. When servers are taxed to their limit, however, it's almost impossible to move workloads anywhere else.
Just consider what happens when a server fails. You can’t start up the affected VMs on other servers because there is no computing capacity available, so all of the affected VMs remain offline until the server is repaired and the VMs are restarted.
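The failure scenario above is really an aggregate capacity check: can the surviving hosts' spare capacity absorb the failed host's load? The sketch below illustrates that check in Python. The host loads, the 100-unit capacity, and the function name are illustrative assumptions, not anything from the article, and a real placement engine would also have to fit individual VMs rather than just aggregate load.

```python
def can_absorb_failure(host_loads, host_capacity, failed_index):
    """Simplified N-1 check: True if the spare capacity on the surviving
    hosts covers the load displaced from the failed host. Assumes one
    uniform capacity figure per host and ignores per-VM placement."""
    displaced = host_loads[failed_index]
    spare = sum(host_capacity - load
                for i, load in enumerate(host_loads) if i != failed_index)
    return spare >= displaced

# Three hosts each at 65% of a 100-unit capacity: spare = 70, displaced = 65
print(can_absorb_failure([65, 65, 65], 100, 0))   # True
# The same hosts pushed to 90%: spare = 20, displaced = 90
print(can_absorb_failure([90, 90, 90], 100, 0))   # False
```

The second case is exactly the over-consolidated scenario: every host is nearly full, so the displaced VMs stay offline until the failed server is repaired.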
Ultimately, many experts suggest remaining within a moderate level of server consolidation where only about 60% to 70% of a server’s resources are normally used. The actual percentage will depend on your particular business situation, but the underlying goal of a server consolidation strategy is to leave some amount of computing resources unused.
Businesses still benefit from greatly improved computing efficiency—up from 5% to 10% utilization in a nonvirtualized environment—while maintaining enough reserve resources to restart VMs without straining the server. In addition, the remaining computing resources allow VM migration between servers to balance workloads or support maintenance efforts.
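The 60% to 70% guideline can be turned into simple capacity arithmetic: cap each resource at the target percentage and let the scarcer resource decide how many VMs fit. The sketch below shows the idea; the host size (32 cores, 256 GB), the VM size, and the 70% cap are illustrative assumptions, not figures from the article beyond its stated 60% to 70% range.

```python
def vms_per_host(host_cpus, host_mem_gb, vm_cpus, vm_mem_gb, cap=0.70):
    """Largest count of identically sized VMs whose combined demand stays
    within cap * host capacity for both CPU and memory."""
    by_cpu = int(cap * host_cpus // vm_cpus)
    by_mem = int(cap * host_mem_gb // vm_mem_gb)
    return min(by_cpu, by_mem)  # the scarcer resource is the limit

# A 32-core, 256 GB host with 2-vCPU, 8 GB VMs at a 70% cap:
# CPU allows 11 VMs, memory allows 22, so CPU is the constraint.
print(vms_per_host(32, 256, 2, 8))  # 11
```

The 30% left unreserved is what makes the earlier failover math work: it is the headroom that lets displaced VMs restart and live migrations proceed without starving the workloads already running.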
This was first published in June 2011