The host-based virtualization products offered by VMware, Microsoft, XenSource and SWsoft have many advantages, but none of them is a one-size-fits-all solution, and new users sometimes launch consolidation projects with unrealistic expectations.
Since you're used to hearing about how great host-based virtualization is, let's look at some of the areas where host virtualization sometimes fails to meet user expectations.
Here are five of the myths that bring unwelcome surprises during virtualization-based consolidation projects.
Myth #1: Consolidating to virtual machines reduces the number of systems on the network
Consolidating to virtual machines, in many cases, will actually increase the number of managed systems on your network. Although the number of physical systems will certainly be reduced, the number of logical systems often increases.
For example, suppose you decide to virtualize 10 servers and run them on one box. You create the VMs and use a P2V tool to migrate the physical systems to virtual machines.
Then you have 10 virtual machines running on top of another host operating system. Depending on your setup, this can leave you with 11 managed systems. (Of course, if the VMs are truly running on the bare metal, then your consolidated 10 servers will result in the management of 10 logical servers.)
The ultimate savings is in hardware costs, maintenance and power. The logical aspects of server, application and operating system maintenance will remain the same.
Other solutions, such as a PolyServe shared data cluster, may reduce the number of both logical and physical servers. Of course, clustering is best suited to specific applications such as file and database services. Ultimately, the role of the systems being consolidated will likely drive the technology choice.
Myth #2: Consolidation reduces hardware costs while providing more efficient performance
Oftentimes, server consolidation to VMs allows you to make better use of underutilized CPUs, which does offer performance efficiency. But the virtual hardware abstraction within the VM engine will introduce some latency as well. For example, VMware's last published benchmark on ESX latency found an average I/O overhead of 13%.
For many organizations, the small amount of added latency is worth the hardware cost, hardware management and power savings realized by virtualizing production server resources.
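To see why the tradeoff often still favors consolidation, here is a minimal back-of-the-envelope sketch. The server count, utilization and the decision to inflate guest load by the full I/O overhead are illustrative assumptions for this example; only the 13% overhead figure comes from the benchmark mentioned above.

```python
# Back-of-the-envelope consolidation estimate. All inputs except the
# 13% overhead figure are illustrative assumptions, not benchmark data.

def consolidation_estimate(num_servers, avg_cpu_util, overhead):
    """Estimate total host demand (in units of one physical server's
    capacity) after consolidating num_servers lightly loaded guests,
    pessimistically inflating each guest's load by the virtualization
    overhead."""
    effective_util_per_guest = avg_cpu_util * (1 + overhead)
    return num_servers * effective_util_per_guest

# 10 servers averaging 15% utilization, with a 13% overhead penalty:
demand = consolidation_estimate(10, 0.15, 0.13)
print(f"Consolidated demand: {demand:.2f} servers' worth of capacity")
```

Even with the overhead counted against every guest, 10 lightly loaded boxes need well under two servers' worth of capacity, which is why the hardware, maintenance and power savings usually dwarf the latency cost.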
Myth #3: Database servers should never be virtualized
With an abundance of clustering solutions available for database applications, few have looked to port database servers to VMs, usually because of performance and I/O concerns. But some organizations are bucking this trend and seeing value in virtualizing database servers.
One major value of virtualization has long been portability, because you can recover a virtual machine on another host system if its primary host fails. One organization that sees this benefit is Arvato Mobile, which has hundreds of virtualized servers in production, including some database servers.
The folks at Arvato swear by their virtualization engine -- SWsoft's Virtuozzo, which allows them to virtualize high-performance Linux applications and servers without worrying about performance loss. At the recent TechTarget Server Virtualization seminar in New York, a representative from Arvato was happy to stand up and tout the company's success in virtualizing database servers.
So the story here is that while not all database servers should be ported directly to VMs, in some instances real benefits such as VM portability and quick recovery can be achieved. For more on the Arvato Mobile experience with SWsoft, take a look at the Arvato Mobile case study.
Myth #4: Virtualized server existence is transparent to the network
Although connected clients cannot distinguish between virtual and physical servers, transparently connecting virtual networks to physical network infrastructure devices may be another story.
For example, Microsoft Virtual Server 2005 R2 virtual networks do not support the VLAN ID in a tag header. This makes the virtual switch on a Virtual Server host a vanilla unmanaged layer 2 switch. You can read more about the virtual hardware features of Microsoft Virtual Server in the TechNet article Emulated hardware.
VMware ESX Server 3, on the other hand, does offer support for IEEE 802.1Q VLAN Trunking, so this platform would allow you to integrate your VM host's virtual switching with your existing VLANs. You can read more about how ESX Server 3 integrates with VLANs in the article VMware ESX Server 802.1Q VLAN solutions.
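As a sketch of what that integration looks like in practice, ESX Server 3's service-console `esxcfg-vswitch` utility can tag a port group with an 802.1Q VLAN ID. The switch name, port group name and VLAN number below are hypothetical placeholders, and the matching physical switch port must be configured as an 802.1Q trunk carrying that VLAN.

```shell
# Hypothetical example: vSwitch1, "VM Network" and VLAN 105 are
# placeholder names; the uplink's physical switch port must trunk VLAN 105.
esxcfg-vswitch -a vSwitch1                       # create a virtual switch
esxcfg-vswitch -L vmnic1 vSwitch1                # attach a physical uplink NIC
esxcfg-vswitch -A "VM Network" vSwitch1          # add a port group
esxcfg-vswitch -v 105 -p "VM Network" vSwitch1   # assign 802.1Q VLAN ID 105
esxcfg-vswitch -l                                # list switches to verify
```

Virtual machines attached to that port group then send and receive frames on VLAN 105 without any VLAN configuration inside the guest.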
If you are planning to roll out a host-based virtualization solution, odds are that your network infrastructure team will want to know how the virtual switches on a virtual machine host will integrate with the existing VLANs on the company network.
So, when evaluating virtualization vendors, how the virtual machines can integrate with your existing VLANs should be an important consideration.
Myth #5: Since the virtualization application supports USB, my USB devices will work with each virtual machine
Supporting a specific hardware technology such as USB is one thing, and supporting each individual device that connects to that bus is another.
Even if a VM application supports USB, you need to verify that your specific USB devices are supported. For example, Microsoft Virtual Server 2005 R2 supports USB but does not support all USB device types. On the Virtual Server 2005 R2 FAQ page, Microsoft states, "Virtual Server currently does not support USB hardware such as smart card readers and scanners. However, standard USB input hardware, such as keyboard and pointing devices, are supported."
It's not safe to assume that all devices that can connect to a supported bus will work as well. Some have found the same to hold true for VMware ESX Server's support for local or SAN-attached SCSI devices. Although most SCSI storage devices, including tape drives and libraries, can be seen and attached to VMs as "Generic SCSI Devices," some libraries and drives are not recognized.
With this in mind, it is always best to test how your specific devices interoperate with a VM technology before deciding on the right solution for your needs.
As you can see, many virtualization myths are simply assumptions based on feature set lists of existing virtualization products. If you have tested how each feature integrates with your existing network infrastructure, you should be well prepared to migrate to a virtualized infrastructure without any major surprises.
About the author: Chris Wolf, MCSE, MCT, CCNA, is a Microsoft MVP for Windows Server-File System/Storage and the Computer and Information Systems Department Head for the ECPI College of Technology online campus. He also works as an independent consultant, specializing in the areas of virtualization, enterprise storage, and network infrastructure management. Chris is the author of Virtualization: From the Desktop to the Enterprise (Apress), Troubleshooting Microsoft Technologies (Addison Wesley) and a contributor to the Windows Server 2003 Deployment Kit (Microsoft Press).