As virtualization takes hold, computer systems vendors have vigorously promoted a vision of elastic computing, also known as utility computing -- or, by some definitions, cloud computing. In this imagined future, a central pool of resources is provisioned -- and deprovisioned -- automatically for end users on demand.
IT managers say the goal of elastic computing with end-user self-service is one that they share with vendors, and many are already hard at work trying to get there. At this stage of the journey, though, they say they’ll reach the finish line only with significant changes to today’s virtualization management tools.
Specifically, IT managers say, true utility computing with self-service provisioning requires four elements which are still a work in progress: centralization and maximal virtualization of resources; proven orchestration and provisioning portals linked to every aspect of the environment; root-cause analysis for heterogeneous infrastructures; and, finally, chargeback that’s integrated with an organization’s existing accounting practices.
Infrastructure moves face organizational headwinds
For resources to be dynamically provisioned and delivered as a service, they must first be centralized into a pool and virtualized. This task, however, is easier said than done: integrating components of the infrastructure to respond to dynamic and self-service provisioning tools within private clouds is no easy feat, even in some of the largest data centers.
Virtualization and infrastructure vendors are aware of this and are moving to create pre-integrated infrastructure stacks to accelerate virtualization. Examples include the Vblock, a joint offering from VMware Inc., Cisco Systems Inc. and EMC Corp., and Hewlett-Packard Co.’s BladeSystem Matrix.
The value of the pre-integrated infrastructure stack goes beyond time to deployment. CareCore National, for example, a health insurance benefits management company headquartered in Bluffton, S.C., was well down the path of deploying EMC storage, VMware virtualization and Cisco blades before engaging with the Virtual Computing Environment (VCE) coalition, said William Moore, CareCore’s CTO.
“Where VCE folks bring value is … to be able to integrate the gear but then really do the next couple phases” to take advantage of the new infrastructure, Moore said. CareCore is working with VCE to add more automated orchestration to the environment, with the eventual goal of creating a hybrid cloud that can be used as a services exchange with other health care companies.
At the same time, CareCore was able to take a rip-and-replace approach that not all companies can do, Moore said. “I was able to say, ‘Let’s be very ruthless. …If . . . I can’t make [a] component of our operational environment fit into this box, I will saw the limb off and grow a new one that fits,’” he said.
Toward more customizable provisioning portals
Once a centralized resource pool is set up, it needs an interface so that users can request pieces of it and administrators can automatically fill these requests. Some self-service provisioning tools are available, but they are in their infancy.
Brian Alexander, a system architect at a large software company based in the Northwest that has 50 data centers and more than 7,000 virtual machines, oversees a self-service environment based on VMware Lab Manager. “We set up the back end and the users don’t ever care about what’s [there],” he said. “They go to a Web page, they pull resources on demand whenever they want them, [then] they throw them away when they’re done.”
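The pull-and-discard workflow Alexander describes -- users grab resources from a back-end pool through a portal and return them when done -- can be sketched in a few lines. This is a purely illustrative model; the class and method names are invented and do not correspond to Lab Manager's actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """Toy model of a self-service pool: users check slots out and back in."""
    capacity: int                               # total VM slots in the pool
    leased: dict = field(default_factory=dict)  # user -> slots currently held

    def checkout(self, user: str, slots: int) -> bool:
        """Grant the request if free capacity remains; no admin in the loop."""
        used = sum(self.leased.values())
        if used + slots > self.capacity:
            return False                        # pool exhausted; request denied
        self.leased[user] = self.leased.get(user, 0) + slots
        return True

    def release(self, user: str) -> None:
        """The user 'throws the resources away' when done."""
        self.leased.pop(user, None)

pool = ResourcePool(capacity=10)
assert pool.checkout("alice", 4)       # granted: 6 slots remain
assert not pool.checkout("bob", 8)     # denied: only 6 slots free
pool.release("alice")
assert pool.checkout("bob", 8)         # granted once alice returns her slots
```

The point of the sketch is that the back end enforces capacity automatically, which is what lets administrators stay out of the request loop.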
But recently, VMware revealed that it will phase out Lab Manager, the company’s development and testing tool, and replace it with its new vCloud Director. Alexander says he plans to move to vCloud Director, but not right away. For one thing, vCloud Director does not support Linked Clones, a feature that offers space-efficient provisioning of storage resources.
Meanwhile, CareCore’s Moore said he’s considered putting together a “stack” of commercial management and orchestration tools with a tool like vCloud Director but has been thwarted by a lack of comprehensive integration.
Beyond provisioning and monitoring, Moore said that elastic computing also requires service discovery and mapping to get a sense of the normal configuration “blueprint” of the environment; service management, including tying in help desk services with the rest of the infrastructure’s root-cause analysis engine; data center automation with configuration compliance; and, under Moore’s definition of resource management, tools that tie all these technical details to line-of-business processes.
“Right now, it’s a cocktail of components … and we’re still experimenting with the ingredients,” Moore said.
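The configuration "blueprint" Moore mentions boils down to recording the expected state of each service and diffing live snapshots against it to surface drift. The sketch below is hypothetical -- the key names and helper are invented for illustration, not drawn from any vendor's tooling.

```python
def config_drift(blueprint: dict, observed: dict) -> dict:
    """Return settings that differ from the blueprint, plus anything
    missing from or unexpected in the live snapshot."""
    drift = {}
    for key, expected in blueprint.items():
        actual = observed.get(key, "<missing>")
        if actual != expected:
            drift[key] = (expected, actual)
    for key in observed.keys() - blueprint.keys():
        drift[key] = ("<not in blueprint>", observed[key])
    return drift

blueprint = {"web.vcpus": 2, "web.memory_gb": 8, "db.vcpus": 4}
observed  = {"web.vcpus": 2, "web.memory_gb": 16, "cache.vcpus": 1}
print(config_drift(blueprint, observed))
# flags the memory change, the missing db entry and the unexpected cache VM
```

A real compliance engine adds discovery, scheduling and remediation on top, but the diff is the core of it.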
Automated management: Early days
Once a resource has been provisioned, it requires ongoing monitoring and maintenance, a trickier concept in an elastic computing environment than in a traditional, static infrastructure.
To deal with the flood of data generated by a dynamic environment, vendors have begun to offer products that add automated root-cause analysis on top of traditional monitoring capabilities, but these are still very new. VMware’s vCenter Operations tool, for example, which can automatically correlate data from other management tools, is just coming out of beta this month. Third-party players such as Netuitive Inc. are also working on answers in this space.
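The kind of correlation these tools automate can be illustrated with a toy example: flag metrics whose newest sample sits far outside their own history, so that simultaneous outliers surface together as one incident. This is a simplification for illustration only -- the metric names are invented, and real products use far more sophisticated statistics.

```python
import statistics

def anomalies(history: dict, threshold: float = 3.0) -> list:
    """Return metric names whose latest value is more than `threshold`
    standard deviations from the mean of the earlier samples."""
    flagged = []
    for name, samples in history.items():
        baseline, latest = samples[:-1], samples[-1]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev and abs(latest - mean) / stdev > threshold:
            flagged.append(name)
    return flagged

metrics = {
    "vm42.cpu_pct": [20, 22, 19, 21, 95],   # sudden spike
    "vm42.disk_ms": [5, 6, 5, 5, 48],       # correlated spike on the same VM
    "vm17.cpu_pct": [30, 31, 29, 30, 31],   # steady; within normal variation
}
print(anomalies(metrics))  # the two vm42 metrics are flagged together
```

Because both vm42 metrics trip at once while vm17 stays quiet, a correlation engine can narrow the root cause to one machine instead of paging an admin for every alarm.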
IT managers also worry about how management will fare as they move to the cloud. CareCore’s Moore said his organization plans to offer a services exchange with business partners using public and hybrid clouds, and he sees a lack of monitoring tools that address public and private clouds through a single, consistent interface.
Wanted: Granular chargeback, with teeth
Even if an organization manages to pull off centralization, comprehensive orchestration and automated root-cause analysis, who’s going to pay for it all? Updating traditional IT chargeback models to account for these new virtualized, dynamically provisioned environments is key to their success, users say.
The Computing & Information Services (CIS) department at Texas A&M University offers self-service IT resources to campus researchers and displays power and cooling savings on the researchers’ portal, said Tom Golson, chief systems engineer for the infrastructure systems and services group. That’s proven to be a big selling point for the service.
“We’re not out to make money – we’re not allowed to make money – we just have to recover our costs,” Golson said. “If we can do things in a sufficiently inexpensive way” and make it attractive “to get rid of … one-off machine rooms that are disproportionately expensive, then that’s a win for everyone.”
For others, chargeback still feels like wishful thinking.
“It would be great if there was a way … of doing performance tuning and allocating more resources to something,” all tied in to a self-service portal with chargeback, said Chris Rima, supervisor for infrastructure services at a utility provider in the Southwest. But “chargeback is a very manual process today,” he added, and it will take time before that kind of granular and dynamic chargeback process can take root within enterprise organizations.
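The granular chargeback Rima describes amounts to metering each tenant's resource-hours and pricing them against a rate card. The sketch below is hypothetical: the meter names and rates are invented, and a production system would pull usage from the hypervisor's metering API rather than a hand-built dictionary.

```python
# Illustrative rate card: dollars per resource-hour (invented figures).
RATE_CARD = {"vcpu_hours": 0.03, "gb_ram_hours": 0.01, "gb_disk_hours": 0.0002}

def chargeback(usage: dict) -> float:
    """Price metered usage against the rate card. An unknown meter raises
    a KeyError rather than going silently unbilled."""
    total = 0.0
    for meter, quantity in usage.items():
        total += RATE_CARD[meter] * quantity
    return round(total, 2)

# e.g. a department that ran 2 vCPUs and 8 GB of RAM for a 720-hour month
monthly = chargeback({"vcpu_hours": 2 * 720, "gb_ram_hours": 8 * 720})
print(monthly)
```

The hard part in practice is not this arithmetic but the metering behind it -- attributing shared, dynamically moving resources to the right cost center -- which is why today's process remains so manual.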
Beth Pariseau is a senior news writer for SearchServerVirtualization.com. Write to her at email@example.com.
This was first published in March 2011