Raleigh, N.C.-based Red Hat Inc. announced a new Linux-based bare-metal hypervisor during Red Hat Summit in Boston last week, but how does it compare with the open source Xen hypervisor that Red Hat also includes in Red Hat Enterprise Linux (RHEL)?
The hypervisor technology is based on the Kernel Based Virtual Machine (KVM) project, whose kernel component was included in mainline Linux as of version 2.6.20 and is available on every Linux distribution, said Brian Stevens, Red Hat's CTO and VP of engineering.
It can be used to host RHEL and Windows-based environments, provided the systems' CPUs have virtualization-assist features (AMD-V and Intel-VT) to accelerate virtualized systems.
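Those assist features are advertised as CPU flags, so a server can be vetted before anything is installed. A minimal sketch, assuming the usual /proc/cpuinfo layout (vmx marks Intel VT, svm marks AMD-V):

```shell
# Count processor entries advertising a hardware virtualization flag.
flags=$(grep -cE '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
if [ "${flags:-0}" -gt 0 ]; then
  status="present"
else
  status="absent"
fi
echo "virtualization assist: $status"
```

On a machine without the flags (or without /proc/cpuinfo at all), the check simply reports "absent" rather than failing.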
But according to those engaged in the newfound KVM-versus-Xen debate, the ease of KVM deployment on Linux may come at some cost. While Red Hat's new hypervisor may be easier to install alongside Linux, Xen may ultimately provide a sounder virtualization offering because its paravirtualized model separates the hypervisor from the operating system. Not only is this separation the prevailing model among the leading virtualization players, but it also facilitates some of the central tasks of a virtual environment, such as scheduling resources for virtual machines. The upshot: a Linux-based hypervisor, said KVM detractors, may not provide the ease of use that Red Hat touts.

The KVM nitty-gritty
According to the KVM wiki, the KVM hypervisor consists of two modules: a loadable kernel module called kvm.ko that provides the core virtualization functionality, and a processor-specific module, kvm-intel.ko or kvm-amd.ko. Weighing in at 64 MB, the hypervisor can be installed from a USB key that is booted onto a system along with Linux.
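Loading those modules by hand is a pair of modprobe calls. A sketch that picks the processor-specific module from the CPU flags and prints the commands an administrator would run (printed rather than executed, since inserting modules requires root and a kernel that ships them):

```shell
# Choose kvm-intel or kvm-amd based on the CPU's virtualization flag.
if grep -q vmx /proc/cpuinfo 2>/dev/null; then
  mod="kvm-intel"
else
  mod="kvm-amd"          # assumes AMD-V when no Intel VT flag is found
fi
# The core module loads first, then the processor-specific one:
echo "modprobe kvm && modprobe $mod"
```

Once both modules are in place, a /dev/kvm device node appears and userspace can start guests against it.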
KVM is not a true hypervisor by definition; it is a hosted virtualization product like Microsoft Virtual PC or VMware Server, said Xen leader and chief architect Ian Pratt, who discussed the KVM-based product during the Xen Summit in Boston on June 24.
That might suggest that KVM is better suited for tactical deployments of virtual machines. "KVM is easier to deal with, in that it is an extension of Linux. If your view of virtualization is simply as an install of Linux and you want to run other instances of Linux on your machine, then you can use KVM to do that," Pratt said.
By comparison, the Xen hypervisor is a paravirtualizing hypervisor that virtualizes only the base platform: the CPU, memory management units and memory, and low-level interrupts. Unlike KVM, the Xen hypervisor itself contains no device drivers, according to Simon Crosby, CTO of XenSource; those live in a privileged control domain, Dom0. The hypervisor's virtualization layer sits between the hardware and the Dom0 kernel, with hosted guest machines running in separate, unprivileged domains.
In contrast, KVM makes the Linux kernel itself into a hypervisor, so that the guests have direct access to the hardware.
The Xen hypervisor also supports a broader array of processors than KVM: besides x86 (IA-32) and x86-64, Xen also runs on IA-64 and PowerPC 970.
Also, since the KVM hypervisor is part of the Linux kernel, it uses the regular Linux scheduler and memory management. But the KVM hypervisor does not know that virtual machines (VMs) are running on it, Crosby said. This can cause problems when it comes to scheduling tasks for VMs: if the hypervisor doesn't know the VMs exist, it does not schedule resources for them, he said.
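The mechanics behind that claim: under KVM each guest is an ordinary Linux process, so the stock scheduler decides when it runs and the stock process tools can steer it. A minimal sketch with a hypothetical PID, printing the commands rather than running them, since renice and taskset need a real guest and appropriate privileges:

```shell
# A KVM guest is just a process, so generic Linux tools adjust its
# scheduling. The PID below is a stand-in for a guest's qemu/kvm process.
pid=12345
renice_cmd="renice +5 -p $pid"     # lower the guest's CPU priority
pin_cmd="taskset -cp 0,1 $pid"     # pin the guest to CPUs 0 and 1
echo "$renice_cmd"
echo "$pin_cmd"
```

That is precisely the property the two camps weigh differently: Red Hat sees reuse of proven kernel machinery, while Xen's backers see a scheduler with no VM-specific awareness.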
In contrast, Xen is an external hypervisor that controls the host server and can schedule resources as needed by guests, Crosby said.
Another potential issue is that every time a new Linux distribution is released, the Linux-based application stack has to be reconfigured; this could create issues for the VMs running on the KVM hypervisor, Crosby said.
Generally speaking, Red Hat's choice of KVM is contrary to the thinking of the leading virtualization players, including Citrix and VMware, who insist that keeping the operating system separate from the hypervisor is best.
"At the end of the day, the virtualization industry has decided that the best way to go is to separate the hypervisor from the operating system so that it is guest independent. KVM's model is, 'I want to use Linux to virtualize my other guests,' but that is not what [the industry] wants to do. People want to virtualize their infrastructure, and you can do that with Xen," Crosby said.

Red Hat defends KVM choice
A central benefit of KVM is that users do not have to reboot systems to run a virtual machine guest. "When your Linux kernel is up, to run a guest with Xen you have to reboot. With KVM, it is just a command you type in, no rebooting required. The image is booted up through the USB key, which can be booted on the server or on the network," said Stephen Tweedie, consulting engineer with Red Hat.
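What that one command looks like in practice: a sketch using the qemu-kvm launcher of the period and a hypothetical disk image name, printed rather than executed, since actually starting a guest needs /dev/kvm and a real image:

```shell
# One userspace command starts a guest; no reboot is involved because
# kvm.ko is already loaded into the running kernel.
# rhel5.img is a hypothetical disk image name.
cmd="qemu-kvm -m 512 -smp 1 -hda rhel5.img -net nic -net user"
echo "$cmd"
```

Contrast with Xen, where the machine must first be booted into the Xen hypervisor with a Dom0 kernel before any guest can be created.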
But when users at Red Hat Summit asked about the performance of KVM compared with Xen, the answer was unclear. "It is a complex equation, depending on your hardware and applications. It will take time to determine where the pitfalls are," Tweedie said.
But as a beta, KVM is a work in progress whose shortcomings can be addressed over time, Red Hat engineers said during a panel discussion at the Red Hat Summit last week.
The Red Hat engineers also put some compatibility concerns to rest by guaranteeing that KVM will be compatible with all of the company's future operating systems and management tools. If there are issues, users will be supported.