In this tip, you'll learn the details and differences among server virtualization, operating system (OS) virtualization, hosted virtualization and bare-metal virtualization. You'll also get a glimpse of the new virtualization technology called hybrid virtualization, and learn how Microsoft's plans for 2008 will affect the virtualization space.
Seven years have passed since VMware released ESX, and although VMware pioneered x86 server virtualization, it is no longer the only settler headed toward the virtualized west. There are several caravans full of talented vendors creating their own brands of virtualization. To help you sort through the various offerings, this article reviews the four virtualization architectures on the market today and suggests what direction these models may take in the years ahead.
Hosted virtualization
The first type of virtualization is the one most users are familiar with -- hosted virtualization. All of the desktop virtualization products, such as VMware Workstation, VMware Fusion, and Parallels Desktop for the Mac, implement a hosted virtualization architecture.
The hosted virtualization approach relies on an existing operating system (OS) being in place: the hypervisor sits on top of the OS, and the virtual machines (VMs) are managed by the hypervisor.
There are many benefits to this type of virtualization. Users can install a virtualization product onto their desktop just as they would any other application and continue to use their desktop OS. Hosted virtualization products also take advantage of the host OS's device drivers, which means the virtualization product supports whatever hardware the host does.
However, hosted virtualization also has its downsides. Both the hypervisor and the host OS contain a memory manager and a central processing unit (CPU) scheduler, and this duplication creates a large amount of overhead. The approach was born of necessity: hosted virtualization products were created before processors offered hardware virtualization extensions.
Hosted virtualization products are still going strong today (as evidenced by VMware Workstation 6.0), but how long this trend continues is unknown. The fact is that the fourth type of virtualization architecture, hybrid, can offer all of the advantages of hosted virtualization without the overhead.
Only time will tell if companies like VMware, Microsoft, and Parallels evolve their hosted products to use a hybrid model.
Bare-metal virtualization
The second virtualization architecture is the current enterprise data center leader -- bare-metal virtualization. VMware ESX is easily the market leader in enterprise virtualization at the moment, and it uses a bare-metal architecture.
The defining feature of this architecture is the absence of an existing OS; the hypervisor sits directly on top of the hardware -- hence the term "bare-metal virtualization." The reason so many data centers implement bare-metal products, such as ESX and Xen, is speed: removing the host OS eliminates the overhead that hosted virtualization carries.
Some readers may be wondering why I have categorized ESX and Xen together; after all, aren't they built on different architectures?
Yes and no. Enter the difference between full virtualization and paravirtualization. With full virtualization, the VM's guest OS has no idea it is being virtualized; paravirtualization requires that the VM's guest OS be modified in order to be virtualized. ESX has traditionally used full virtualization, while Xen pioneered paravirtualization. In truth, both forms are still bare-metal virtualization, and both are used by ESX and Xen today. So for the purposes of this article, full virtualization and paravirtualization are both categorized under the heading of bare-metal virtualization.
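The trap-versus-hypercall distinction can be sketched with a toy model. The classes and method names below are purely illustrative (real hypervisors operate at the instruction level, not in Python), but they capture the difference: an unmodified guest's privileged instructions fault and get emulated, while a paravirtualized guest requests services from the hypervisor explicitly.

```python
# Toy model contrasting full virtualization (trap-and-emulate) with
# paravirtualization (explicit hypercalls). Everything here is a
# simplified illustration, not a real hypervisor interface.

class Hypervisor:
    def __init__(self):
        self.log = []

    def trap(self, instruction):
        # Full virtualization: a privileged instruction from an
        # unmodified guest faults, and the hypervisor emulates it.
        self.log.append(f"trapped and emulated: {instruction}")

    def hypercall(self, name):
        # Paravirtualization: the modified guest asks for the service
        # directly, skipping the cost of the fault.
        self.log.append(f"hypercall: {name}")

class UnmodifiedGuest:
    """Guest OS that believes it runs on real hardware."""
    def flush_tlb(self, hv):
        hv.trap("mov cr3")  # privileged instruction -> fault -> emulate

class ParavirtGuest:
    """Guest OS modified to know about the hypervisor."""
    def flush_tlb(self, hv):
        hv.hypercall("flush_tlb")  # direct request, no trap

hv = Hypervisor()
UnmodifiedGuest().flush_tlb(hv)
ParavirtGuest().flush_tlb(hv)
print(hv.log)
```

Both guests accomplish the same task; the paravirtualized path simply avoids the expensive fault, which is where Xen's early performance edge came from.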
There are some downsides to using bare-metal virtualization. Typically the vendor publishes a hardware compatibility list (HCL) that dictates what hardware can be used with its virtualization product. To keep the hypervisor as slim as possible, the number of device drivers in the hypervisor kernel is kept to a minimum. Some hypervisors have workarounds, such as Xen's driver domains, but these are not for the faint of heart.
In my opinion, the aspect of bare-metal virtualization that makes it so appealing for data center use is not its performance but the fact that products implementing it are distributed as appliances or server OSes. Take VMware ESX or XenServer, for example: you simply boot the server from an installation CD-ROM and the product installs on the hard drive without the fuss or muss of messing with an existing OS. Embedded hypervisors are great examples of virtualization appliances: turn the server on and it configures itself for your virtualization infrastructure. None of these features, however, derives from the architecture itself, which is why bare-metal virtualization may face serious competition this coming year from the fourth architecture in this list.
Operating system virtualization
OS virtualization has been making waves lately because Microsoft is rumored to be in the market for an OS virtualization technology. The most well-known products that use OS virtualization are Parallels Virtuozzo and Solaris Containers.
Despite relying on an existing OS, OS virtualization has very low overhead because it does not use a traditional hypervisor to manage VMs. Instead, the OS virtualization model divides a single OS into containers and uses a container manager to facilitate management. This architecture has many benefits, speedy performance being the foremost. Another is reduced disk space requirements: many containers can share the same files.
The big caveat with OS virtualization is the OS requirement: containers must run the same OS as the host. This means that if you are using Solaris Containers, then every container must run Solaris; if you are implementing Virtuozzo containers on Windows 2003 Standard Edition, then all of its containers must also run Windows 2003 Standard Edition.
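That constraint can be captured in a few lines. The sketch below is a hypothetical model of a container manager, not the API of Virtuozzo or Solaris Containers; it simply shows why a container that shares the host's kernel cannot run a different OS.

```python
# Illustrative sketch of the OS-virtualization constraint described above.
# The class and its interface are hypothetical, not a real product's API.

class ContainerManager:
    def __init__(self, host_os):
        self.host_os = host_os
        self.containers = []

    def create_container(self, name, guest_os):
        # Containers share the host's kernel, so the guest OS must match.
        if guest_os != self.host_os:
            raise ValueError(
                f"cannot run a {guest_os} container on a {self.host_os} host")
        self.containers.append(name)
        return name

mgr = ContainerManager("Solaris")
mgr.create_container("web01", "Solaris")  # allowed: same OS as host
try:
    mgr.create_container("win01", "Windows 2003")  # rejected
except ValueError as e:
    print(e)
```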
For some people, the container OS requirement is a deal-breaker, but many other IT administrators see OS virtualization as the perfect architecture for implementing virtual desktops and Web servers, since those platforms share many common files. However, much like the preceding two architectures, OS virtualization may soon see its proponents jumping ship to a hybrid model.
Hybrid virtualization
I have been alluding to this architecture for the duration of this article, and now I will explain why.
The hybrid model uses a host OS like hosted virtualization does, but instead of layering a hypervisor on top of the host OS, a kernel-level driver is inserted into the host OS kernel. This driver acts as a virtual hardware manager (VHM), coordinating hardware access between the VMs and the host OS. The hybrid model relies on the memory manager and CPU scheduler of the existing kernel. As with the bare-metal and containerized architectures, the absence of redundant memory managers and CPU schedulers increases the performance of this model. Yet unlike OS virtualization, the hybrid model is not restricted to creating guests with the same OS type as the host.
Hybrid virtualization offers all of the benefits of the aforementioned architectures and hardly any drawbacks, yet some negative aspects do exist. The hybrid model requires that the underlying processor have virtualization extensions (such as Intel VT and AMD-V) to function. This means that older hardware that could otherwise be used by other virtualization architectures is useless to hybrid products. And while some people see the reuse of the existing kernel's memory manager and CPU scheduler as a good thing, some industry analysts assert that relying on an uncontrolled entity such as a third-party kernel is not. It puts the future of the VHM in the hands of the kernel it is loaded into, because remember, despite all assertions to the contrary, in a hybrid architecture the VHM is *not* a hypervisor. For example, many people think that KVM is a hypervisor, and this is simply not the case.
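On Linux, a quick way to see whether a processor exposes these extensions is to look for the "vmx" (Intel VT) or "svm" (AMD-V) flag in /proc/cpuinfo. The following is a minimal sketch of that check, assuming a Linux host:

```python
# Check /proc/cpuinfo for the CPU flags that hybrid-model products
# (KVM, for example) depend on: "vmx" for Intel VT, "svm" for AMD-V.

def has_virt_extensions(cpuinfo_text):
    """Return 'Intel VT', 'AMD-V', or None based on the cpuinfo flags line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT"
            if "svm" in flags:
                return "AMD-V"
    return None

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print(has_virt_extensions(f.read()) or "no virtualization extensions found")
    except FileNotFoundError:
        print("not a Linux system")
```

If neither flag appears, the hardware cannot run a hybrid-model product at all, which is exactly the limitation described above.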
Virtualization in 2008
So what's in store for server virtualization in 2008? Microsoft will acquire an OS virtualization technology in order to expand its portfolio, but internally it will likely be working on creating a hybrid model with its NT kernel. Windows 7 will likely ship with a VHM that allows the easy creation of VMs using Microsoft's yet-to-be-announced built-in VM manager. On the other hand, virtualization vendors that do not have access to a kernel's source code will be forced to continue releasing hosted products. This will give Microsoft an edge in terms of VM performance.
OS virtualization will eventually disappear as the hybrid model replaces it. Disk space is incredibly inexpensive, so the remaining benefit of OS virtualization will not be reason enough to avoid a hybrid architecture that matches its performance while removing the single-OS drawback. Bare-metal virtualization products will continue to thrive due to the sheer investment that vendors have put into them. However, hybrid-model appliances will begin to appear as very inexpensive alternatives to their more expensive bare-metal competitors.
Andrew Kutz is an avid fan of .NET, Open Source, Terminal Services, coding and comics. He is a Microsoft Certified Solutions Developer (MCSD), a SANS/GIAC Certified Windows Security Administrator (GCWN) and a VMware Certified Professional (VCP) in VI3.