Licensing, performance and storage: The hidden costs of server virtualization, part three

To virtualize or not to virtualize? Before deciding, take a close look at this technology's hidden costs.

In part one, we discussed the power and heat costs and the management concerns of virtualization. Part two discussed networking issues and the problem of virtual machine sprawl. Finally, in part three, we will conclude by discussing licensing, performance and storage.


Maintaining software licensing compliance is yet another challenge. An administrator has a number of options to help maintain control over software licensing. But what happens to that control in a virtualized world? Physical restrictions are far less of a deterrent when ISO images replace installation media and a virtual machine can be controlled remotely, KVM-style, from the POST screen onward.

But perhaps the biggest problem facing IT administrators is the virtualization platform's ability to clone and replicate virtual machines. Without a method to control the mass duplication and deployment of virtual machines, an administrator will have a license compliance nightmare on his hands.

Within a template or master image of a virtual machine lies a guest operating system as well as any pre-installed software applications, any or all of which may contain a software license and therefore a license restriction. An administrator needs the ability to keep track of how many instances of each image are deployed. If you thought maintaining control over licensing was a chore before virtualization, the problem has just been magnified.
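Tracking instances per image can be reduced to a simple counting problem. The sketch below is a minimal illustration of the idea; the inventory data, template names and entitlement counts are all hypothetical, and a real deployment would pull this information from the virtualization platform's management API rather than a hard-coded list.

```python
# Minimal sketch: count deployed VM instances per template image and
# compare against license entitlements. All data here is hypothetical.
from collections import Counter

# Hypothetical inventory: each deployed VM records its source template.
deployed_vms = [
    {"name": "web-01", "template": "win2008-iis"},
    {"name": "web-02", "template": "win2008-iis"},
    {"name": "db-01",  "template": "win2008-sql"},
    {"name": "db-02",  "template": "win2008-sql"},
    {"name": "db-03",  "template": "win2008-sql"},
]

# Hypothetical license entitlements per template image.
entitlements = {"win2008-iis": 5, "win2008-sql": 2}

def compliance_report(vms, limits):
    """Return {template: (deployed, allowed, compliant)} for each image."""
    counts = Counter(vm["template"] for vm in vms)
    return {
        tmpl: (counts[tmpl], allowed, counts[tmpl] <= allowed)
        for tmpl, allowed in limits.items()
    }

for tmpl, (used, allowed, ok) in compliance_report(deployed_vms, entitlements).items():
    status = "OK" if ok else "OVER LIMIT"
    print(f"{tmpl}: {used}/{allowed} instances -- {status}")
```

Even a rough report like this surfaces the over-deployed image before an audit does.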


One driving factor behind server virtualization is the desire to make better use of an under-utilized server's capacity. While it is true that server virtualization can utilize the server's processing capacity more efficiently and effectively, what tends to get overlooked is the added stress virtualization places on other physical resources. The disk subsystem may become an even bigger bottleneck once the server consolidation process is completed.

A server's processor operates orders of magnitude faster than its hard disks. So while server consolidation makes better use of the physical server's processor(s), it in turn slows down disk I/O performance. As the number of virtual machines on a physical server increases, so does the number of guest operating systems, each generating its own stream of disk I/O requests.

This multiplied load of disk I/O requests further strains an already bottlenecked server component. It is important to remember that as one virtual machine increases its usage of the shared disk subsystem, it effectively slows down the performance of every other virtual machine sharing that environment.
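The oversubscription risk is easy to see with back-of-the-envelope arithmetic. The numbers below are illustrative assumptions, not measurements: a per-guest I/O rate, a guest count, and a rough per-spindle capability for the shared array.

```python
# Back-of-the-envelope sketch: aggregate disk I/O demand from consolidated
# guests versus what the shared spindles can deliver. All figures are
# illustrative assumptions, not measurements.

per_vm_iops = 75          # assumed average I/O requests/sec per guest
vm_count = 12             # guests consolidated on one host
spindle_iops = 180        # rough capability of one 15K RPM disk
spindles = 4              # disks in the shared array

demand = per_vm_iops * vm_count
capacity = spindle_iops * spindles

print(f"Aggregate demand: {demand} IOPS, array capacity: {capacity} IOPS")
if demand > capacity:
    print("Disk subsystem is oversubscribed; every guest's I/O will slow down.")
```

Under these assumed figures, twelve modest guests already demand more I/O than four fast spindles can supply, which is the shape of the bottleneck described above.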

Disk I/O bottlenecks and performance problems can also be caused by disk fragmentation. Virtualization can exacerbate the fragmentation problem. Creating a fixed-sized virtual hard disk file (where the entire file size is created at once) will help alleviate part of the disk fragmentation problem.

But the more popular choice seems to be a dynamically expanding disk, where the file starts off small and grows only as data is created within the virtual machine. This may save valuable disk space up front, but it can cause huge performance problems down the line. As data is written and removed within the virtual machines, the underlying file is repeatedly extended and modified on the physical file system, causing it to become more fragmented over time. As the fragmentation worsens, so does the disk I/O bottleneck, and with it the performance of every virtual machine residing on that physical storage device.
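The fixed-versus-dynamic trade-off can be demonstrated with plain files: a "fixed" file has all of its space allocated up front, while a sparse file (analogous to a dynamically expanding virtual disk) claims blocks only as data is actually written. The file names and the 64 MB size below are arbitrary, and the exact on-disk figures depend on the file system.

```python
# Illustration of fixed vs. dynamically expanding allocation using plain
# files. A sparse file reports the same apparent size but occupies far
# fewer blocks until data is written into it.
import os

SIZE = 64 * 1024 * 1024  # 64 MB, standing in for a virtual disk

# "Fixed" disk: allocate the full size immediately by writing zeros.
with open("fixed.img", "wb") as f:
    f.write(b"\0" * SIZE)

# "Dynamic" disk: create a sparse file with the same apparent size.
with open("dynamic.img", "wb") as f:
    f.truncate(SIZE)

for name in ("fixed.img", "dynamic.img"):
    st = os.stat(name)
    print(f"{name}: apparent {st.st_size} bytes, on disk {st.st_blocks * 512} bytes")
```

The fixed file's blocks sit contiguously from day one; the sparse file's blocks land wherever free space happens to be each time the guest writes, which is exactly how fragmentation accumulates.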

Virtualization also magnifies the need for redundant components within the server. A single physical component failure now affects the performance or uptime of numerous environments rather than just a single server. Likewise, using virtualization to consolidate machines requires careful planning, sizing and proper configuration.

And just like a component failure, improper sizing or faulty configuration of your virtualization platform can negatively affect performance or uptime of numerous virtual machines rather than just a single instance.


Storage issues become more challenging and will typically get elevated on an administrator's priority list. In a small virtual machine environment, storing the virtual hard disk files or virtual machine images locally on the server might initially appear to be a good solution, but as the environment grows so do the problems associated with local disk storage.

An obvious limitation of using local disk storage is the fact that many servers are not equipped with enough disk space to store several large virtual machine images. Virtual machine images do vary in size, but a single virtual hard disk file may reach beyond 100GB in size. When several of these types of virtual machines are consolidated on the same physical server, free disk space is quickly consumed.

And as previously explained, if virtual machines are allowed to be mass replicated without any type of monitoring or controls put in place, local storage will be consumed just as quickly as someone could issue a simple copy command.

The next step is to move up to a network storage solution such as a SAN or NAS-based solution, both for performance reasons and for easier image management. The downside to purchasing such a solution is the high price tag, not only for the actual device but also for the expertise required to operate it.

Combining virtualization with a network storage solution isn't as easy as it sounds. It is a very tricky proposition to scope out the right storage solution, and it's even more difficult to predict just how many virtual machines can be operated at the same time while maintaining acceptable I/O throughput and providing a reasonable end-user experience.

In evaluating the size or capacity of the storage solution versus the performance realized, most organizations take the "start off small" approach and purchase a 50TB or smaller SAN solution. These same organizations may come to realize that they underestimated their storage needs, and then they need to upgrade beyond the 100TB-sized solution. The added expense of purchasing a second, higher-end SAN solution and the time involved to migrate all of the data are two major pain points that most sprawling virtualization environments could face.
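A rough growth projection makes the "start off small" risk concrete. The sketch below estimates how long a SAN of a given capacity lasts under a flat monthly growth rate; the starting usage and growth figures are hypothetical assumptions, and real planning would use measured growth trends.

```python
# Simple projection sketch for SAN sizing: given current usage and an
# assumed flat monthly growth rate, estimate when a given capacity fills.
# Starting point and growth rate below are hypothetical.

def months_until_full(capacity_tb, current_tb, monthly_growth_tb):
    """Months until usage reaches capacity at a flat monthly growth rate."""
    months = 0
    used = current_tb
    while used < capacity_tb:
        used += monthly_growth_tb
        months += 1
    return months

# Hypothetical environment: 20 TB used today, growing 2 TB per month.
print(months_until_full(50, 20, 2))   # months before a 50 TB SAN fills up
```

If a projection like this shows the array filling well inside the hardware's expected service life, the cost of the eventual second SAN and data migration belongs in the original budget.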

And like other devices, storage components are not exempt from the virtualization/component failure problem. When using either local or network-based storage, any outage or failure of the storage device will now affect any number of virtual machines that rely on the device for accessing the virtual machine image files. A single storage component failure has the potential of halting tens or hundreds of virtual machines rather than just a single physical server.

About the authors: David Marshall is a senior member of the reference architect team at Surgient, Inc., specializing in server virtualization, virtualization applications and Windows administration. He also runs the InfoWorld Virtualization Report as well as a virtualization news blog. David is also a co-author of Advanced Server Virtualization: VMware and Microsoft Platforms in the Virtual Data Center, a book that details years of hands-on experience using and implementing server virtualization solutions.

Dan Knezevic is a senior network engineer and team lead for the data center operations team at Surgient, Inc., providing expertise in data center network and server infrastructure as well as virtualization platforms. He also specializes in network security and enterprise storage solutions, and brings six years of virtualization integration experience in the data center environment.
