Data center administrators are too conservative to let virtual machine (VM) deployments snowball into a much-ballyhooed sprawl, said VMware Inc. director of data center products Patrick Lin. The real sprawl will take place outside of production, in test environments.
Of course, VM deployments in data centers will increase, but automated management tools -- such as those contained in the new VMware Virtual Infrastructure 3 (VI3) -- will keep provisioning and management in hand, Lin said in this interview with SearchServerVirtualization.com.
Lin also described beta testers' reactions to the automated management tools in VI3. The new platform contains existing products ESX Server, Virtual SMP and VirtualCenter. (The latter includes VMotion, which lets you move VMs around a set of physical servers automatically without downtime.) New additions are a distributed file system for managing VMs, VMFS; VMware Distributed Resource Scheduler; VMware High Availability (HA); and VMware Consolidated Backup.
SearchServerVirtualization.com: What features in Virtual Infrastructure 3 are getting the most buzz from beta users?
Patrick Lin: There's huge interest in Distributed Resource Scheduler and the notion of resource pools, which make utility computing's notion of shared IT resources very real. Whereas previous implementations involved million-dollar services engagements, this is something that can be deployed on a mass scale. Central resources can be doled out according to each application's needs without requiring much attention to be paid to the hardware underneath.
How does this play out in reducing server provisioning and management chores?
Lin: Rather than having to figure out how many VMs you can fit on one server with a certain amount of capacity, you can see a pool of resources in VI3. For example, if you have a cluster of up to, say, 16 physical servers, and each one has eight gigahertz of processing power and 20 gigabytes of RAM, then you now have a resource pool of eight times 16, or 128 gigahertz of processing power, and 320 gigabytes of RAM that you can then dole out as you wish.
When you bring up a VM, you don't need to figure out which server you are going to bring it up on. You just throw it out in the resource pool, and VI3 will automatically put it in the right place to get the resources that it needs. If, for some reason, it's not getting those resources, then VI3 senses that and uses VMotion to place it where the requirements are being satisfied.
That's the additional layer of separation and abstraction that is allowed in VI3.
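The pooling and placement Lin describes can be sketched in a few lines. This is an illustrative toy, not VMware's implementation: DRS's actual placement logic is proprietary and far more sophisticated (it also rebalances running VMs with VMotion), and the host names, field names, and first-fit strategy below are assumptions made for the example.

```python
# Illustrative sketch only -- not VMware's algorithm. Aggregate per-host
# capacity into one pool, then place a VM on the first host that can
# satisfy its CPU and memory requirements (first-fit).
def pool_capacity(hosts):
    """Total CPU (GHz) and RAM (GB) across all hosts in the cluster."""
    return (sum(h["cpu_ghz"] for h in hosts),
            sum(h["ram_gb"] for h in hosts))

def place_vm(hosts, vm):
    """First-fit placement: reserve capacity on the first host that fits."""
    for h in hosts:
        if h["free_ghz"] >= vm["ghz"] and h["free_gb"] >= vm["gb"]:
            h["free_ghz"] -= vm["ghz"]
            h["free_gb"] -= vm["gb"]
            return h["name"]
    return None  # the pool cannot satisfy the request

# Lin's hypothetical cluster: 16 hosts, each with 8 GHz and 20 GB.
cluster = [{"name": f"esx{i}", "cpu_ghz": 8, "ram_gb": 20,
            "free_ghz": 8, "free_gb": 20} for i in range(16)]
print(pool_capacity(cluster))              # (128, 320)
print(place_vm(cluster, {"ghz": 4, "gb": 8}))
```

The point of the abstraction is visible in the last line: the caller asks for four gigahertz and eight gigabytes, not for a particular server.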
Are most organizations that virtualize buying new servers to do so?
Lin: When people are getting new hardware in order to run virtualization, I think it's largely because they're interested in legacy re-hosting. The idea is that you have an application sitting on top of an operating system on older hardware that is no longer supported by the vendor, and so you want to be able to migrate to hardware that is supported. The easiest way to do that without disturbing the application is to virtualize it.
It's not necessary to get rid of existing hardware, but in many ways hardware is becoming less important. The old model of managing your infrastructure was to pay very much attention to the hardware -- so much so that every time you changed an application, it required some corresponding change in the server configuration. When you have the level of abstraction that virtualization provides, you don't have to worry about that as much.
Could you offer an example of how virtualization reduces dependence on server management and/or provisioning?
Lin: One easy example is when you want to provision a server for an application. Without virtualization, you have to buy a server; rack, stack and cable it; and then install lots of software. With virtualization software, you create a template ahead of time that preconfigures a lot of what you want: your golden image, if you will.
Virtualization is similar, in a way, to the appliance concept, in which you have the preconfigured stack of software that you want to deploy in a standardized fashion.
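The template idea can be sketched as cloning a preconfigured description rather than installing from scratch. Everything here is hypothetical for illustration: VI3 templates are full VM images, not dictionaries, and the package list and field names are invented.

```python
import copy

# Hypothetical sketch of template-based ("golden image") provisioning.
# The template is defined once; each new VM is a clone plus per-VM settings.
GOLDEN_TEMPLATE = {
    "os": "Windows Server 2003",
    "packages": ["antivirus", "backup-agent", "monitoring-agent"],
    "cpu_ghz": 2,
    "ram_gb": 4,
}

def provision_from_template(hostname):
    """Clone the golden image, then customize only what differs per VM."""
    vm = copy.deepcopy(GOLDEN_TEMPLATE)  # deep copy so the template is untouched
    vm["hostname"] = hostname
    return vm

vm = provision_from_template("app01")
```

The standardization benefit is that every clone starts from the same known-good stack, which is exactly the appliance-style deployment Lin compares it to.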
What's next on the virtualization agenda for companies that are already using it in some way?
Lin: The most common paths into virtualization are legacy re-hosting, test and development labs, and virtualizing small departmental applications that, for stability reasons, needed to be on their own server.
The next step is adding a greater variety of applications into the virtual environment, not just the smaller ones but also the larger ones, like databases. Many capabilities in VI3 are designed specifically to accommodate those larger applications -- things like enabling up to four virtual CPUs inside of the virtual machine and having a larger amount of memory possible there.
In surveys we've done, users are now moving toward standardizing on virtualization. About a quarter of our customers are creating policies that require new application deployments or application expansions to be moved into a virtual machine, unless there's some reason why it can't go in one, like a fax server that requires special hardware that's not supported.
Doesn't that type of policy lead to having as many virtual machines to manage as servers, something people are calling "VM sprawl"?
Lin: I don't think the analogy is that clean. Even if you have an equivalent or slightly larger number of VMs than physical servers, the operational effort required to do the provisioning is less, as is the power, cooling and space used.
That said, the fact that it's easier to create virtual machines than it is to provision a physical server does mean that you will need to have some controls over the lifecycle of the VM. One thing that people miss when they talk about VM sprawl is that just because provisioning is easy doesn't mean VMs are going to be provisioned by the score. Look at most production environments; they're pretty locked down. People tell me that it takes an act of God to change something inside the data center. So, it's not like it's just going to go crazy there because people are naturally cautious about making changes.
Where people are generating large numbers of VMs is in the test and development lab or IT staging, because you can try out many things there if you maintain a library of images that lets you test against a lot of configurations. We're addressing those issues with products like Akimbi Slingshot [a virtual lab automation system]. Management here is about taking control of your library of VMs and automating checking them in, checking them out and so on. Also, our ACE product has the capability to expire VMs after a certain point in time. These types of tools help you control when VMs come into being and when they go out of being.
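The expiry control Lin attributes to ACE amounts to attaching a lease to each lab VM and reclaiming the ones whose lease has lapsed. A minimal sketch of that idea, with made-up VM names and dates (the real products work on actual VM images, not records like these):

```python
from datetime import datetime, timedelta

# Hypothetical sketch of lifecycle expiry for lab VMs: each VM carries an
# expiration timestamp, and a sweep flags the ones past their lease.
def expired_vms(vms, now):
    """Return the names of VMs whose lease has lapsed as of `now`."""
    return [vm["name"] for vm in vms if vm["expires"] <= now]

now = datetime(2006, 7, 1)
lab = [
    {"name": "test-db",     "expires": now - timedelta(days=1)},
    {"name": "staging-web", "expires": now + timedelta(days=30)},
]
print(expired_vms(lab, now))  # ['test-db']
```

A real lab-automation tool would then archive or delete the flagged VMs, which is how "when they go out of being" gets enforced rather than left to manual cleanup.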