A solid server consolidation strategy can improve virtual machine performance, cut costs and raise hardware utilization, but don't let over-consolidation become your downfall. As you begin server consolidation planning, watch for the following pitfalls.
Watch out for memory overcommit
One of the easiest ways to prevent excessive consolidation in your server consolidation planning is to implement suitable IT practices from the outset, especially when it comes to features such as memory overcommit.
Todd Erickson, president of Technology Navigator, which provides business intelligence to the financial industry, points out the dangers of overcommitting a server's available computing resources, a practice he calls "thin provisioning" the server. For example, VMware vSphere and Citrix XenServer both support a memory overcommit feature that allows an administrator to provision more memory than is physically available on the server.
“Nobody ever does thin provisioning as a best practice,” Erickson said. “If you’re doing any kind of thin provisioning, you’re probably already bumping up against a consolidation ceiling.” The problem is that exhausting resources with features such as memory overcommit can impair virtual machine (VM) performance or stability, he said.
And even when a company embraces resource overcommitment as part of its server consolidation planning, the amount of overcommitment that is considered acceptable tends to creep upward as more VMs are crammed onto physical servers.
With memory overcommit, for example, a server with 48 GB of physical memory and 52 GB of allocated memory (4 GB overcommitted, or roughly 8%) may seem acceptable, but that server is over-consolidated. “You’re setting the stage for problems,” said Erickson, adding that the organization inevitably accepts increasing levels of over-consolidation, further raising the risk of failures over time.
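The arithmetic above is simple enough to capture in a short Python sketch. This is an illustrative helper, not a vendor tool; the function name and units are assumptions:

```python
def overcommit_pct(physical_gb: float, allocated_gb: float) -> float:
    """Return memory overcommitment as a percentage of physical capacity."""
    if physical_gb <= 0:
        raise ValueError("physical memory must be positive")
    # Only count allocation beyond physical capacity as overcommitment.
    return max(0.0, allocated_gb - physical_gb) / physical_gb * 100

# The example from the article: 52 GB allocated on a 48 GB host.
print(round(overcommit_pct(48, 52), 1))  # prints 8.3
```

Note that the denominator is physical capacity, so 4 GB of excess allocation on a 48 GB host works out to about 8.3%, not a flat 10%.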
Monitoring and management makes perfect
Suitable management tools can help identify over-consolidated servers and allow administrators to forestall server consolidation planning problems before they arise.
Experts emphasize that proactive management lets administrators find and fix potential resource problems before they affect workloads. No IT department should have to rush out and buy a new server in response to an unexpected computing resource shortage.
“At the end of the day, you’ve got to be constantly looking into your management console and understand what you’re running at for resources,” said Scott Roberts, director of information technology for the Town of South Windsor, Conn. “You don’t want to get to the point where people are calling with problems.”
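The kind of routine check Roberts describes can be sketched as a simple script that flags over-consolidated hosts. This is a minimal illustration assuming hypothetical per-host metrics, not any particular vendor's management console API:

```python
from dataclasses import dataclass

@dataclass
class HostMetrics:
    name: str
    physical_mem_gb: float
    allocated_mem_gb: float

def flag_overconsolidated(hosts, threshold_pct=0.0):
    """Return (host name, overcommit %) pairs for hosts whose allocated
    memory exceeds physical memory by more than threshold_pct."""
    flagged = []
    for h in hosts:
        over = (h.allocated_mem_gb - h.physical_mem_gb) / h.physical_mem_gb * 100
        if over > threshold_pct:
            flagged.append((h.name, round(over, 1)))
    return flagged

# Hypothetical inventory: one overcommitted host, one with headroom.
hosts = [HostMetrics("esx01", 48, 52), HostMetrics("esx02", 64, 60)]
print(flag_overconsolidated(hosts))  # prints [('esx01', 8.3)]
```

In practice the metrics would come from the management console's reporting interface, and the check would run on a schedule so problems surface before users call.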
The information derived from modern management consoles can also help with other important tasks such as workload balancing and capacity planning. Workload balancing analyzes the distribution of VMs and the resources they require, then generates recommendations for organizing or moving workloads to create a more efficient environment.
Those tools can sometimes “find” capacity that had been obscured by careless or inefficient workload deployment. Sound capacity planning practices are needed to evaluate resource use over time and ensure that resources are available to meet future demand.
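Actual workload-balancing recommendations are vendor-specific, but the core idea can be sketched as a greedy heuristic that places the largest VMs first on the host with the most free memory. This is an illustrative simplification, not any product's real algorithm, and it considers memory only:

```python
def balance(vm_mem_gb, host_capacity_gb):
    """Greedy placement: largest VM first, onto the host with the most
    free memory. Returns {host_index: [vm sizes]}; raises if a VM cannot fit."""
    free = list(host_capacity_gb)
    placement = {i: [] for i in range(len(free))}
    for vm in sorted(vm_mem_gb, reverse=True):
        i = max(range(len(free)), key=lambda j: free[j])  # least-loaded host
        if free[i] < vm:
            raise RuntimeError(f"no host can fit a {vm} GB VM")
        free[i] -= vm
        placement[i].append(vm)
    return placement

# Four VMs spread across two 32 GB hosts.
print(balance([16, 8, 8, 4], [32, 32]))  # prints {0: [16, 4], 1: [8, 8]}
```

Real tools weigh CPU, storage and network alongside memory, and factor in migration cost, but the same principle applies: spreading demand evenly exposes spare capacity that uneven deployment hides.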
Stephen J. Bigelow, a senior technology editor in the Data Center and Virtualization Media Group at TechTarget Inc., has more than 20 years of technical writing experience in the PC/technology industry. He holds a bachelor of science in electrical engineering, along with CompTIA A+, Network+, Security+ and Server+ certifications and has written hundreds of articles and more than 15 feature books on computer troubleshooting, including Bigelow’s PC Hardware Desk Reference and Bigelow’s PC Hardware Annoyances. Contact him at email@example.com.
This was first published in June 2011