Control virtualization chaos with VM management automation

Virtualization veteran Alessandro Perilli sounds off on why virtual machine sprawl happens and why IT shops should automate management of VMs.

From the "Addressing all phases of virtualization adoption" series

Too few IT managers evaluate a very important aspect of data center management: automating the provisioning of virtual machines (VMs). In fact, all IT organizations would benefit from examining automated virtualization management in general. So let's look at how this capability will soon become a must-have, driving vendors' product development over the next few years, and at the products available today.

This tip is part of my virtualization series, Addressing all phases of virtualization adoption. In the last installment, we examined some security challenges virtualization poses, mainly talking about disaster recovery, configuring clusters and creating failover structures.

From server sprawl to VM sprawl
At first, server virtualization's ability to consolidate tens of physical servers onto just one host was considered a real solution to the uncontrolled proliferation of new servers. Unfortunately, early adopters experienced just the opposite. What happened?

The good news is that they found the cost savings of implementing a new server in a virtual data center to be dramatic, because provisioning now takes hours, sometimes minutes, instead of weeks or months. The only real limits on deployment are the availability of physical resources to assign to new VMs and, if Windows is used, license costs. (The latter has less impact when a large corporation has a volume licensing agreement with Microsoft.)

Suddenly, IT managers could move quickly from planning to live implementation. Yet this ease often created a false perception of the infrastructure's limits.

Since multi-tier architectures seemed less complex to build, IT directors contemplated new scenarios, such as isolating applications for security, performance or compatibility reasons. New applications could also be deployed for testing without hesitation.

What often happened in this scenario was that companies did not enforce strict policies. Virtual infrastructures, depending on their size, presented different challenges that weren't considered.

Bigger corporations, for instance, are still trying to understand how to account for virtualization in their cost centers. They granted new resources to departments, but infrastructure administrators could not later determine which VMs were actually being used, and how.

Smaller companies without an authorization process have granted provisioning capabilities to several individuals -- even people without deep virtualization knowledge -- in order to execute projects faster. So, within a short period, almost anybody wanting a new VM could simply assemble one and power it on.

In such uncontrolled provisioning environments, three things typically happen:

  1. Many who create and deploy new VMs have no understanding of the big picture, such as how many VMs a physical host can really handle, how many are planned to be hosted on a single physical server, and which kinds of workloads are best suited to a given location.
  2. Every new VM deployment compromises that big picture, leading to performance issues and continuous rebuilding of consolidation plans.
  3. Every new VM brings a set of operating system and application licenses that require special attention before being assigned -- attention they are not getting.

So, without really realizing it, companies have planted a virtual machine jungle with no documentation, no management of the related licenses, no precise roles and sometimes not even an owner. Obviously, this ad hoc approach will impact the overall health of a virtual data center.

The need for automation
As the virtual data center grows, IT managers need new ways to perform routine operations and tools that help them scale up when needed.

When handling a large number of VMs, the biggest problem is placement. As I've said many times during this series, correct distribution of workloads is mandatory to achieve good performance with the given physical resources.

Choosing the best host for a virtual machine is not easy when taking into account the host machine's free resources and already-hosted workloads. That's when capacity planning tools are highly desirable.
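
As a rough illustration of the decision a capacity planning tool automates, here is a minimal Python sketch that picks a host for a new VM based on free CPU and memory headroom. The host inventory, the resource figures and the pick_host helper are hypothetical placeholders, not the API of any specific product.

    # Minimal placement sketch (illustrative data, not from a real product):
    # choose the host with the most free memory that can still satisfy
    # the new VM's CPU and memory request.
    hosts = [
        {"name": "esx01", "cpu_free_mhz": 4200, "mem_free_mb": 6144, "vm_count": 9},
        {"name": "esx02", "cpu_free_mhz": 1800, "mem_free_mb": 2048, "vm_count": 14},
        {"name": "esx03", "cpu_free_mhz": 6500, "mem_free_mb": 10240, "vm_count": 5},
    ]

    def pick_host(hosts, cpu_mhz, mem_mb, max_vms=20):
        """Return the candidate host with the largest memory headroom, or None."""
        candidates = [
            h for h in hosts
            if h["cpu_free_mhz"] >= cpu_mhz
            and h["mem_free_mb"] >= mem_mb
            and h["vm_count"] < max_vms
        ]
        if not candidates:
            return None  # no host can take the workload without breaking the plan
        return max(candidates, key=lambda h: h["mem_free_mb"])

    print(pick_host(hosts, cpu_mhz=2000, mem_mb=4096))  # picks esx03 in this example

A real tool would also weigh utilization trends, reservations and affinity rules, which is exactly why doing this by hand does not scale.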

Managing capacity manually during everyday data center life is simply overwhelming. A lot of time is needed just to decide placement, and the whole environment is almost liquid, with machines moving from one host to another to balance resource usage, to free a host for maintenance or for other reasons. In this scenario, the best placement becomes a relative concept.

Another significant problem in large virtual infrastructures is customizing VM deployment.

While virtualization technologies used in conjunction with tools like Microsoft Sysprep make it easy to create clones and distribute them with new parameters, current deployment processes don't scale well and only address single operating system instances.

In large infrastructures, business units rarely request single virtual machines; more often they ask for multi-tier configurations. Every time these mini virtual infrastructures need to be deployed, IT administrators have to manually put in place specific network topologies, access permissions, service-level agreement policies and so on.

In such scenarios, it is improbable that the required VMs will need only the simple customization Sysprep offers. More often they require installation of specific applications, interconnection to existing remote services, execution of scripts before and after deployment and so on -- all operations to be performed for each virtual infrastructure, and a huge loss of time when done by hand.
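
To make the idea concrete, here is a minimal sketch of what a multi-tier deployment template and its automation driver might look like in Python. The tier names, the clone_vm and run_script helpers and every parameter are hypothetical placeholders standing in for whatever API a provisioning tool actually exposes.

    # Hypothetical multi-tier template: base image, network and a
    # post-deployment customization script for each tier.
    STACK_TEMPLATE = {
        "web": {"base_image": "win2003-iis-template", "count": 2,
                "network": "dmz-vlan", "post_script": "configure_iis.cmd"},
        "app": {"base_image": "win2003-app-template", "count": 2,
                "network": "app-vlan", "post_script": "install_middleware.cmd"},
        "db":  {"base_image": "win2003-sql-template", "count": 1,
                "network": "db-vlan", "post_script": "restore_test_db.cmd"},
    }

    def clone_vm(base_image, name, network):
        # Placeholder for the virtualization platform's cloning call.
        print(f"cloning {base_image} -> {name} on {network}")
        return name

    def run_script(vm, script):
        # Placeholder for pushing and executing a script inside the guest.
        print(f"running {script} on {vm}")

    def deploy_stack(template, stack_id):
        """Deploy every tier of the template, then apply its customization."""
        deployed = []
        for tier, spec in template.items():
            for i in range(spec["count"]):
                vm = clone_vm(spec["base_image"], f"{stack_id}-{tier}-{i}", spec["network"])
                run_script(vm, spec["post_script"])
                deployed.append(vm)
        return deployed

    deploy_stack(STACK_TEMPLATE, stack_id="proj42")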

Finally, most virtual infrastructure deployments involve a typical scenario in which, to test several different stand-alone projects from several departments, an environment has to be destroyed and recreated on demand. On every new provisioning, both requestors and administrators have to remember the correct settings and customizations for all tiers.
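
Continuing the hypothetical sketch above, once the whole stack is captured in a template, destroying and recreating it on demand becomes a matter of re-running the same code; nobody has to remember the settings.

    def destroy_stack(vms):
        # Placeholder: power off and delete each VM through the platform's API.
        for vm in vms:
            print(f"destroying {vm}")

    # One department's test cycle: settings live in the template, so every
    # fresh copy is identical to the previous one.
    vms = deploy_stack(STACK_TEMPLATE, stack_id="proj42-run2")
    # ... run the tests ...
    destroy_stack(vms)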

An emerging market
Considering such big risks and needs in today's virtual data centers, it's not surprising that vendors are working hard to offer reliable, scalable automated provisioning tools.

Young start-ups -- Dunes from Switzerland, Surgient from Austin, Texas, and VMLogix from Boston -- have to compete against current virtualization market leader VMware. That's a tall order, because VMware acquired know-how and an already available product from another young company, Akimbi, in the summer of 2006.

Akimbi Slingshot proved to be an interesting product before the acquisition, and VMware has spent a lot of time improving it further and integrating it into its ESX Server and VirtualCenter flagship solutions. This integration will be an important selling point, since it leverages the already-acquired skills of VMware customers in a familiar management environment.

On the other hand, more and more IT managers are looking at platform-agnostic products able to automate VM provisioning in mixed environments, where the virtualization platform doesn't matter. Here, Surgient's products (VQMS/VTMS/VMDS) and VMLogix LabManager have much more appeal, since they support VMware platforms as well as Microsoft's and, in the near future, Xen.

Apart from Dunes, all the vendors mentioned are now focusing their products on the first practical application of automated provisioning: virtual lab management. Their priority is basic provisioning capabilities, such as multi-tier deployments, enhanced customization of deployed clones and physical resource scheduling. This is probably all customers feel they need at the moment, while virtual data centers have yet to reach critical mass.

In the near future, IT organizations will search for harder-to-find features, like provisioning authorization flow management or license management.

In any case, the autonomic data center is still far off. So far, only Dunes, with its Virtual Service Orchestrator (VS-O), offers a true framework for fully automating today's virtual data centers.

About the author: Alessandro Perilli is a recognized IT security and virtualization technology analyst. He is CISSP certified and is also certified in Check Point, Cisco, Citrix, CompTIA, Microsoft, and Prosoft. In 2006 he received the Microsoft Most Valuable Professional (MVP) award for security technologies. Perilli pioneered modern virtualization evangelism and is the founder of the well-known blog virtualization.info. He is also the founder of the False Negatives project, a high-quality IT security consulting and training business in Italy.


This was first published in March 2007
