
The 'free VM' and other myths that lead to sprawl

Despite the efficiency consolidation brings, even small VMs consume resources, and there are real financial consequences to spinning up new workloads.

Technology allows the virtualization administrator to create a server in minutes, providing previously unheard-of flexibility and enabling IT to move at the ever-increasing speed of business. It's an agility that helps IT shift from a perceived business barrier to a business enabler. However, this new golden age of IT working in lockstep with the needs of the business comes with a price: the continual and sometimes runaway growth of a virtualized infrastructure.

In the days before virtualization, IT provided resources in a fairly straightforward way. A business unit needed a server for an application, and IT made the purchase, typically through the requesting group's budget or a project budget. After a few weeks or months, the server was set up and deployed. Because this required both time and money, IT staffers planned carefully to ensure proper design and deployment.

These practices changed with the arrival of virtualization -- both a blessing in flexibility and a curse, giving rise to the myth of the free VM and enabling the phenomenon known as VM sprawl.

How much is too much?

VM sprawl comes in two primary forms: the quantity of VMs and the size of those VMs.

If an application owner is given the choice between two CPUs or four CPUs, odds are the preference will be for the latter. This isn't always a case of greed (though sometimes it is exactly that); rather, it is based on the assumption that more is better. With computers and computer logic, more often is better. It's a concept we have grown accustomed to, and it's unlikely to change anytime soon.

But what's actually going on with all those CPUs and applications? Today's monitoring tools offer a detailed look behind the scenes and can, in fact, tell us a different story than the one guest-level Windows monitoring tells.

While our operating systems support CPU multi-threading, many of the applications we use run just as well on two CPUs as they do on 16. Traditional computer logic would suggest more CPUs yield better performance, but software has to be written to take advantage of those threads. A large, distributed multi-threaded application takes time to code, and thus may be too costly for many developers. With per-core speeds largely plateaued and core counts continuing to climb, a mostly single-threaded application can leave most of a VM's cores underutilized.
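As a back-of-the-envelope illustration of that last point (the thread and vCPU counts below are hypothetical, not figures from any particular monitoring tool), an application that can only keep N cores busy can never push a larger VM past N divided by its vCPU count in aggregate utilization:

```python
def max_aggregate_utilization(app_threads: int, vcpus: int) -> float:
    """Upper bound on whole-VM CPU utilization when the application
    can only keep `app_threads` cores busy at any one time."""
    busy_cores = min(app_threads, vcpus)
    return busy_cores / vcpus

# A two-threaded app on an oversized 16-vCPU VM:
print(f"{max_aggregate_utilization(2, 16):.1%}")  # 12.5%
# The same app on a right-sized two-vCPU VM:
print(f"{max_aggregate_utilization(2, 2):.1%}")   # 100.0%
```

In other words, the extra 14 vCPUs in the first case are pure allocation, not performance.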

Memory is slightly different from CPU in that applications will take more available memory if allowed to, whether or not it is needed. This caching of memory is done so the application has access to it if needed, which makes sense until you look at what is actively being used.

Before virtualization, it was difficult to see what memory was active in an operating system or application. With the hardware virtualized, it becomes possible to see memory that's allocated, cached and active. This additional insight can show that, while memory is relied upon, actual usage can range from 90% of what is allocated down to as little as 40%, depending on the application, operating system and configuration. All of these variables mean two servers are rarely identical. It was virtualization that helped bring this knowledge to light.
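The allocated-versus-active gap can be made concrete with a small sketch. The inventory below is made-up sample data in the 40-90% band described above, not output from a real monitoring product:

```python
# Hypothetical inventory: (vm_name, allocated_gb, active_gb), as a
# hypervisor-level monitoring tool might report them.
inventory = [
    ("app-server-01", 16, 14.4),   # ~90% of allocation actively used
    ("db-server-02",  32, 19.2),   # ~60%
    ("file-server-03", 8,  3.2),   # ~40% -- a right-sizing candidate
]

def active_ratio(allocated_gb: float, active_gb: float) -> float:
    """Fraction of allocated memory that is actually active."""
    return active_gb / allocated_gb

for name, alloc, active in inventory:
    ratio = active_ratio(alloc, active)
    flag = "  <- oversized?" if ratio < 0.5 else ""
    print(f"{name}: {ratio:.0%} of {alloc} GB active{flag}")
```

A report like this, run across a whole cluster, is where the "more is always better" assumption starts to fall apart.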

Usage data shows that the more-is-always-better premise is not necessarily true. Throwing hardware (or virtual hardware) at servers will not necessarily yield better results. Tools available today, such as vFoglight, vCOPS and others, reveal the true usage of the VM and exactly how much of our infrastructure is being wasted.

The myth of the free VM

The second type of sprawl found in a virtual environment involves the sheer numbers of VMs being used to support particular applications.

IT administrators and architects once had a built-in safety mechanism when it came to designing new environments: cost. Real budgets came into play when configuring hardware, and this constraint required IT to create functional and cost-effective designs. Virtualization eliminates these hurdles, clearing the way for this second form of VM sprawl.

"But a VM is free." This tired argument is based on the concept that, since a virtual infrastructure already exists, allocating a few VMs shouldn't cost anything. After all, only a "tiny piece" of the total resources is involved. The logic is that it wouldn't make sense to require a single VM requester to pay for the entire infrastructure. The virtual infrastructure is part of the overhead, and it should not be charged to a specific department. The deeper concern is that when something is regarded as free (or otherwise included) it is often overused and eventually abused.

The "free VM" argument has been a challenge to contend with from day one. Chargeback and showback tools were supposed to help solve this problem by showing users and management exactly how resources are being put to work. Part of the challenge with these tools is the cost and complexity involved in the initial setup.

Another impediment is the resulting data. Chargeback tools can show you, for example, which group has servers consuming four vCPUs, 16 GB of memory and 2 TB of disk compared to a group with one vCPU, 4 GB of memory and 500 GB of disk. By assigning a monetary value to the resources, you can find that one group is using more resources than another. But what can you do with that information, and will it really help in containing VM sprawl?
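A showback report along those lines is essentially a rate card multiplied by allocations. Here is a minimal sketch using the two example groups above; the per-vCPU, per-GB monthly rates are invented for illustration and would come from your own cost model in practice:

```python
# Illustrative monthly rate card -- these dollar figures are made up.
RATES = {"vcpu": 25.00, "mem_gb": 5.00, "disk_gb": 0.10}

def monthly_cost(vcpus: int, mem_gb: int, disk_gb: int) -> float:
    """Showback cost of one VM's allocation under the rate card."""
    return (vcpus * RATES["vcpu"]
            + mem_gb * RATES["mem_gb"]
            + disk_gb * RATES["disk_gb"])

# The two groups from the example above:
large = monthly_cost(vcpus=4, mem_gb=16, disk_gb=2048)  # 2 TB of disk
small = monthly_cost(vcpus=1, mem_gb=4, disk_gb=500)    # 500 GB of disk
print(f"large VM: ${large:,.2f}/mo, small VM: ${small:,.2f}/mo")
```

The arithmetic is trivial; as the article notes, the hard part is deciding what the rates should be and what anyone is supposed to do with the resulting numbers.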

Many companies do not have internal chargeback abilities, and trying to allocate a piece of a shared environment is challenging at best. It is not the same as simply charging a department for a new laptop or tablet -- you cannot put an asset tag on a virtual server. While it's possible to break down the cost of compute, networking and storage for a customized VM, it is an overwhelming task without dedicated provisioning and chargeback systems and an internal structure that supports the chargeback model.

Creating a workable chargeback ecosystem is difficult enough -- so much so that many organizations simply don't bother. Consequently, the infrastructure remains a shared, corporate resource and requests to use it can run wild.

Next Steps

What you need to know about VM sprawl

Tips for controlling and preventing VM sprawl

This was last published in February 2015





Join the conversation



How do you combat VM sprawl?
Tackling VM sprawl involves auditing VMs to ensure they are mapped to a hypervisor cluster. We implement good naming standards to help track down VM owners when necessary. It is important to implement data policies using thin provisioning and to archive VMs for future reference or use. It is also worthwhile to implement VM lifecycle management tools as a means of keeping all data centralized.
Infrastructure affected by VM sprawl should be treated differently. Infrastructure is important, but we also have to remember the applications, because flexibility is key and scalability can be problematic.