Organizations are starting to look beyond the immediate business case and focus on ongoing management processes, where measurement plays a key role. Organizations that get it right have a system in place to track and measure the actual cost of delivering processing to the end user.
A big focus of virtualization is maximizing utilization. While cutting waste and inefficiency is certainly one of the aims of virtualization, utilization is not the best way to measure the success of your initiative. More often than not, new hardware provides greater capacity: utilization rates on the old boxes may have been maximized, but a hardware refresh that brings in newer boxes with greater computing power will cause utilization rates to decline. The result is a pattern of peaks and valleys in utilization reporting, so the measure doesn't accurately reflect progress over time.
Cost is another key measure, and while costs are easily tracked at a macro level, most organizations have neither the mindset nor the systems to measure them on a usage basis in a virtualized infrastructure. Accurate chargebacks are instrumental in showing business units the direct benefits of virtualization, but some IT departments simply don't embrace the concept.
CIOs may feel that defining specific costs for services leads to a relationship more like the one business units would have with an outsourcer, where IT is forced to focus more on pricing and revenue and less on partnering for business process improvement and developing new revenue streams for the company. IT departments may also fear that linking actual costs to value delivered sets them up for comparisons with commercial service providers. These concerns are well founded in many cases, so breaking down the barriers can be a challenge.
Linking costs to services
IT needs to understand that the least costly way to run any IT operation of significant size is with a well-performing internal operation. If that is what exists, there is no danger that an outsourcer can offer better service at a lower price. The decision to outsource is frequently driven by factors that go well beyond simple cost comparisons, and having a chargeback system in place that links costs to services will not make outsourcing more likely. A good chargeback system is a win/win for IT and its internal customers because:
- The business units get a clear understanding of what drives their IT costs, allowing them to manage their demand for IT resources and prioritize new projects properly. This lets them forecast future costs more accurately and improve their budgeting process. They will also generally feel that IT is treating them more fairly, since older, simpler cost allocation methods are usually neither equitable nor defensible. In the past, business units felt they were paying too much but had no tools to identify why. With accurate, equitable and defensible chargeback mechanisms in place, they know exactly why they pay what they pay.
- The chargeback model also provides IT with a more meaningful way of measuring the cost of its services, in terms that make sense to end users. Improvements in processes will result in increased value of the service and/or reduced chargebacks, with a clear impact on the profitability of the business units. Business units appreciate the transparency and will tend to make fewer inaccurate assumptions about the value of the services they receive.
- Finally, chargeback gives IT a lever that can be used to motivate the business to move away from older, more expensive technologies or configurations to newer ones that can help the organization as a whole drive down costs (e.g., virtualization).
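To make the mechanics concrete, here is a minimal sketch of usage-based chargeback: a shared infrastructure cost is allocated to business units in proportion to the CPU seconds each consumed. The unit names, dollar amounts and usage figures are hypothetical illustrations, not any organization's actual allocation model.

```python
# Hedged sketch: allocate a shared hardware cost across business units
# in proportion to measured usage (CPU seconds). All figures are
# hypothetical illustrations.

def chargeback(total_cost, usage_by_unit):
    """Split total_cost proportionally to each unit's CPU seconds."""
    total_usage = sum(usage_by_unit.values())
    return {unit: total_cost * used / total_usage
            for unit, used in usage_by_unit.items()}

monthly_usage = {          # CPU seconds consumed this month (hypothetical)
    "finance":   4_000_000,
    "sales":     1_000_000,
    "logistics": 5_000_000,
}

bills = chargeback(25_000, monthly_usage)  # $25,000 shared hardware cost
for unit, amount in sorted(bills.items()):
    print(f"{unit}: ${amount:,.2f}")
# finance pays $10,000.00, logistics $12,500.00, sales $2,500.00 --
# each unit can see exactly why it pays what it pays.
```

Because the allocation is driven by measured consumption rather than headcount or flat splits, it is equitable and defensible in exactly the sense described above.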
The best way to measure the progress of server virtualization is to quantify the cost of delivering processing power to the end user through a cost-per-CPU-second metric. Organizations first need to determine the capacity and power rating of each box in their data center. At Compass, we use a standard formula, based on industry standards for tracking hardware capacity, and apply it to help our clients identify costs per machine. A company that takes the time to assess each box for capacity and cost before virtualization can then see whether the cost per processing unit actually decreases, and how fast. This metric gives IT management a true representation of efficiency over time. In addition, most consolidation initiatives involve guesswork and estimates of efficiency gains; a unit-based measurement system provides a clearer picture of performance, so consolidation plans can be adjusted and adapted as the technology is implemented.
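The arithmetic behind the metric can be sketched in a few lines. This is not Compass's actual formula; the server counts, monthly costs, utilization rates and capacity ratings below are hypothetical placeholders chosen only to show how the unit cost is computed before and after consolidation.

```python
# Hedged sketch: cost per CPU second before and after consolidation.
# All capacity, cost and utilization figures are hypothetical.

SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

def cost_per_cpu_second(monthly_cost, cpu_seconds_delivered):
    """Unit cost of the processing actually delivered."""
    return monthly_cost / cpu_seconds_delivered

# Before: 10 older servers at $800/month each, averaging 15% utilization
# of a baseline capacity of 1 unit per box.
before_cost = 10 * 800
before_cpu_seconds = 10 * 0.15 * 1 * SECONDS_PER_MONTH

# After: 3 newer hosts at $1,500/month each, averaging 40% utilization
# of 4x the per-box capacity (each utilized second delivers 4 units).
after_cost = 3 * 1500
after_cpu_seconds = 3 * 0.40 * 4 * SECONDS_PER_MONTH

before_unit = cost_per_cpu_second(before_cost, before_cpu_seconds)
after_unit = cost_per_cpu_second(after_cost, after_cpu_seconds)

# The unit cost falls even though raw utilization percentages and
# server counts moved in different directions.
print(f"before: ${before_unit:.6f} per CPU second")
print(f"after:  ${after_unit:.6f} per CPU second")
```

Note that normalizing by rated capacity is what keeps the metric honest across a hardware refresh: a less utilized but far more powerful box can still deliver processing at a lower unit cost.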
Ideally, you implement a measurement program before a consolidation or virtualization project begins. Even without that baseline, however, whatever stage you are at, you need measures in place to gauge improvement over time.
The benefits of measurement extend beyond a consolidation project to ongoing performance management. Detailed measures like cost per CPU second let organizations see how other changes in the environment affect efficiency and cost. While these systems require an investment of time and, sometimes, the help of a third party, the ability to track and measure the impact of significant changes in the data center is invaluable.
Metrics like CPU seconds or utilized CSpec (a measure of how much compute capacity is actually used across the server farm) are directly linked to the delivery of IT value. They represent, in measurable terms, the amount of work the compute platforms are doing (computing power can certainly be wasted by end users, but that issue goes beyond measurement of the IT department itself). A key performance indicator (KPI) such as hardware cost per CPU second is the best indicator we have of the unit cost of computing value delivered. In a virtualized environment we can demonstrate that this is true, because this KPI will always improve as virtualization is used to avoid hardware purchases, regardless of the power of the new servers coming in the door. A less appropriate KPI, like hardware cost per server, might actually go up even as more processing is done on less hardware.
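The divergence between the two KPIs can be sketched directly. The year-over-year figures below are hypothetical, chosen only to illustrate how cost per server can rise during a refresh while cost per utilized CPU second falls.

```python
# Hedged sketch: hardware cost per server vs. hardware cost per CPU
# second across a refresh cycle. All figures are hypothetical.

SECONDS_PER_MONTH = 30 * 24 * 3600

def kpis(server_count, cost_per_box, utilization, capacity_units):
    """Return (cost per server, cost per utilized CPU second)."""
    total_cost = server_count * cost_per_box
    cpu_seconds = (server_count * utilization * capacity_units
                   * SECONDS_PER_MONTH)
    return total_cost / server_count, total_cost / cpu_seconds

# Year 1: many small, cheap boxes running one workload each.
y1_per_server, y1_per_cpu_sec = kpis(20, 700, 0.12, 1)

# Year 2: fewer, pricier virtualization hosts with 4x the per-box
# capacity, running consolidated workloads at higher utilization.
y2_per_server, y2_per_cpu_sec = kpis(5, 2000, 0.45, 4)

print(y1_per_server, y2_per_server)    # cost per server goes UP
print(y1_per_cpu_sec, y2_per_cpu_sec)  # cost per CPU second goes DOWN
```

Judged by cost per server, the refresh looks like a step backward; judged by cost per CPU second, the real efficiency gain is visible.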
About the author:
Scott Fueless is a senior consultant with Compass America, Inc.