
How to measure the success of your server consolidation project

New contributor Scott Fueless offers a tried-and-true methodology for measuring the success of server consolidation and explains why organizations often skip this critical step.

Many companies are using server consolidation and virtualization together to reduce costs, lower power consumption and increase the efficiency of their data centers. It would stand to reason, then, that these organizations also have a firm grasp of how the success of these initiatives should be tracked and measured. The reality is that in most organizations, this simply isn't the case.

In this tip, I'll describe a tried-and-true methodology for measuring the success of server consolidation via virtualization. I'll also look at why organizations often skip this critical step.

Despite the focus on measuring IT value in recent years, some IT departments have put the cart before the horse where server consolidation and virtualization are concerned. They've deployed virtual machines (VMs) quickly to combat the management challenges of server sprawl in today's data centers. A better approach is to treat these initiatives like any other large project, which means tracking and measuring their impact on cost and efficiency in the data center.

Server consolidation approaches – good, bad and ugly
Most organizations fall into one of three categories when it comes to measuring the impact of consolidation:

  • Hope for the Best: These organizations don't implement specific measurement programs. Instead, they bank on the promise of cost savings and hope that an overall downward trend emerges in base costs, such as power or hardware. While these costs are important, they don't provide a complete view of the operational cost of the data center and the true impact of the initiative. Cost also isn't the sole measure of IT efficiency.

  • Better than Nothing: Some organizations put a few tools and variables in place to measure basic hardware utilization. However, simply tracking hardware utilization without linking it back to cost doesn't accurately assess the initiative's overall impact on efficiency.

  • The Total Package: Organizations that are getting it right have put in place a system whereby they can track and measure the actual cost of delivering processing to the end-user.

Server consolidation success measurement challenges
Obviously, the first two approaches fall short of truly measuring the impact of a consolidation project over time. So why don't more organizations put the necessary systems in place? In short, because doing so is hard: the diversity and complexity of large data centers alone make common measurement systems difficult to establish.

For example, a key objective of server consolidation is to maximize the utilization of each server, thereby increasing efficiency. However, new hardware more often than not provides greater capacity: utilization rates on the old boxes may have been maximized, but a newer box with greater computing power running the same workload will report lower utilization. Hardware refreshes therefore produce a pattern of peaks and valleys in utilization reporting, so the measure doesn't accurately reflect the progress of the consolidation initiative over time.
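
To make that refresh effect concrete, here's a minimal sketch; the workload and capacity figures are purely illustrative.

```python
# Illustrative only: a constant workload "looks" less efficient on a
# bigger box, even though nothing about the consolidation changed.

workload = 80.0        # steady demand, in arbitrary compute units

old_capacity = 100.0   # aging server, nearly maxed out
new_capacity = 400.0   # refresh hardware with ~4x the compute power

print(f"Before refresh: {workload / old_capacity:.0%} utilization")  # 80%
print(f"After refresh:  {workload / new_capacity:.0%} utilization")  # 20%
```

The drop from 80% to 20% is a reporting artifact of the refresh, not a regression in the consolidation effort.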

Another key barrier to putting measurement systems in place is allocating costs accurately and in a meaningful way. Most organizations currently look at combined costs: high-level measures that don't differentiate between types of servers or even operational divisions. Some large IT departments have the resources in place to accomplish this on their own, but most require third-party help, which sometimes presents yet another barrier.

One way to measure server consolidation success
Most consolidation initiatives are planned using guesswork and estimates of how far an organization can go in terms of increasing efficiency. What's needed are measurement systems that provide organizations with a clearer picture of potential efficiency gains, so that they can adjust and adapt consolidation plans as they begin to implement the technology.

In my experience, the best way to measure the progress of server virtualization is to quantify the cost of delivering processing power to the end user through a cost-per-CPU-second metric. To do this, organizations first need to determine the capacity and power rating for each box in their data center. I've used a standard formula based on industry benchmarks to track hardware capacity and to help organizations figure out the breakdown of costs per machine.
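
As a rough illustration of how such a metric could be computed, here is a minimal sketch. It assumes cost per CPU-second is defined as a box's total annual cost divided by the CPU-seconds it actually delivers (rated capacity × average utilization × seconds per year); the function and all figures are hypothetical, not the exact formula mentioned above.

```python
# Hypothetical cost-per-CPU-second calculation; the definition and all
# numbers below are illustrative assumptions.

SECONDS_PER_YEAR = 365 * 24 * 3600

def cost_per_cpu_second(annual_cost, rated_capacity, avg_utilization):
    """annual_cost: hardware, power, licensing and support for the box ($/yr).
    rated_capacity: benchmark capacity in CPU-seconds delivered per second
                    (e.g. core count normalized to a reference processor).
    avg_utilization: average fraction of that capacity actually used."""
    cpu_seconds_consumed = rated_capacity * avg_utilization * SECONDS_PER_YEAR
    return annual_cost / cpu_seconds_consumed

# Before consolidation: 20 lightly loaded physical boxes
before = cost_per_cpu_second(annual_cost=20 * 6_000,
                             rated_capacity=20 * 4, avg_utilization=0.10)
# After: 4 virtualization hosts carrying the same workloads
after = cost_per_cpu_second(annual_cost=4 * 12_000,
                            rated_capacity=4 * 16, avg_utilization=0.55)
print(f"Before: ${before:.6f}/CPU-sec   After: ${after:.6f}/CPU-sec")
```

Run over time, the same calculation shows whether consolidation is actually driving the unit cost down, independent of the utilization peaks and valleys discussed earlier.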

A company that takes the time to assess each box in terms of capacity and cost before virtualization can then see whether the cost per processing unit actually decreases, and how quickly. This metric gives IT management a true representation of efficiency over time.

Ideally, organizations will implement a measurement program before a consolidation or virtualization project begins. Even without that starting point, however, it's always best to put the measure in place regardless of what stage you're at. Without it, your IT group can't tell what the payoff has been, or connect cause and effect when performance changes.

The benefits of this capability extend beyond a consolidation project to performance management. Detailed measures like cost per CPU second enable organizations to see how other changes in the environment impact efficiency and cost.

A detailed understanding of unit costs can also be incorporated into a broader methodology to model and forecast operational requirements of various business strategies. Is the business planning to aggressively pursue acquisitions, or to expand through organic growth? If you know existing unit costs and utilization rates and can reasonably predict technology trends and their impact on prices, you'll be able to clearly define the costs and risks associated with any particular course of action.
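
As a simple illustration of that kind of modeling, the sketch below projects annual processing spend from a measured unit cost; the demand-growth and price-decline figures are hypothetical assumptions.

```python
# Hypothetical forecast built on a measured unit cost; growth and
# price-trend figures are illustrative assumptions only.

def forecast_annual_costs(unit_cost, annual_demand, years,
                          demand_growth, price_decline):
    """Project yearly processing spend from a known cost per unit
    (e.g. $/CPU-second), a demand growth rate, and an assumed
    technology price trend."""
    costs = []
    for _ in range(years):
        costs.append(unit_cost * annual_demand)
        annual_demand *= 1 + demand_growth   # organic or acquisition-driven
        unit_cost *= 1 - price_decline       # hardware price/performance trend
    return costs

organic = forecast_annual_costs(0.0005, 1e9, 5,
                                demand_growth=0.10, price_decline=0.15)
acquisition = forecast_annual_costs(0.0005, 1e9, 5,
                                    demand_growth=0.40, price_decline=0.15)
for year, (o, a) in enumerate(zip(organic, acquisition), start=1):
    print(f"Year {year}: organic ${o:,.0f}   acquisition-led ${a:,.0f}")
```

Comparing the two scenarios side by side makes the cost and risk of each course of action explicit rather than anecdotal.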

About the author: Scott Fueless is a senior consultant with Compass America, Inc.
