KVM is an important virtualization technology that adds hypervisor capabilities to the Linux kernel, which, in turn, allocates memory, provides security and schedules processes efficiently. In addition, Linux is easily stripped of unnecessary code, allowing for enormous optimization and highly efficient operation.
Through the years, KVM technology has evolved to support more processors and OSes -- even Windows guests. As Linux gains traction in mainstream data centers, administrators should have a better understanding of KVM technology and the benefits it can bring to the enterprise.
How is KVM different from other types of hypervisors?
First introduced in 2007, KVM is an open source Linux kernel module that competes with commercial hypervisors such as VMware ESXi and Microsoft Hyper-V.
Linux employs a strong modular approach to software deployment, allowing administrators to add or remove modules to compile the fastest and most efficient codebase possible for distribution. KVM is just one of those possible modules, and it can be compiled into the Linux kernel to bring hypervisor functionality to Linux.
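That modular design is visible in practice: KVM typically ships as loadable kernel modules rather than permanently compiled-in code, and administrators can inspect and load them with standard tools. A brief sketch -- the vendor-specific module name depends on the CPU, kvm_intel for Intel or kvm_amd for AMD:

```shell
# List loaded KVM modules; the core kvm module is paired with a
# vendor-specific module, kvm_intel or kvm_amd.
lsmod | grep kvm

# Load the vendor module by hand if it is not already present
# (this example assumes an Intel CPU; use kvm_amd on AMD hardware).
sudo modprobe kvm_intel

# Once KVM is available, the kernel exposes its control device:
ls -l /dev/kvm
```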
The open source nature of KVM -- and the entire Linux environment -- means that the kernel and its modules can be modified and optimized as desired to improve performance or add functionality. This is dramatically different from commercial hypervisor products, which are delivered as opaque, monolithic code -- potentially carrying less efficient or unnecessary components -- that can't be adapted or streamlined by IT staff.
The Linux kernel handles the file system, block devices and physical drivers. KVM exposes an interface that allows an administrator to establish a KVM guest -- a KVM VM. KVM configures the VM's address space, delivers virtual CPUs, handles I/O streams, provides a firmware image to the guest and maps each guest's video output back to the host kernel for display. Once established, a KVM guest receives vCPUs and I/O, which are then passed to the guest OS kernel, where the guest drivers and file system can interact with the guest application -- the workload. Thus, every VM becomes a regular Linux process that can operate at near-bare-metal speeds.
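A minimal sketch of establishing such a KVM guest using QEMU's userspace tooling, which drives the kvm kernel module; the disk size, memory, vCPU count and installer ISO name here are placeholder values for illustration:

```shell
# Create a qcow2 disk image for the new guest (size is a placeholder).
qemu-img create -f qcow2 guest-disk.qcow2 10G

# Boot the guest. -enable-kvm hands vCPU execution to the kvm kernel
# module instead of pure software emulation; -m sizes guest RAM in MB
# and -smp sets the number of vCPUs delivered to the guest.
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
  -drive file=guest-disk.qcow2,format=qcow2 \
  -cdrom install-image.iso
```

In production, most administrators wrap this invocation with management tooling such as libvirt rather than running QEMU by hand.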
There is some debate over the role of KVM as a type 1 or type 2 hypervisor. The reliance of KVM technology on an underlying kernel causes KVM to be incorrectly classified as a type 2 -- hosted -- hypervisor. However, this architectural relationship doesn't accurately reflect the fact that KVM runs directly on hardware, and can use processor virtualization extensions, such as Intel VT-x and AMD-V. KVM technology simply uses the Linux kernel as a minimum OS, just as ESXi and Hyper-V require some minimum OS capabilities to function.
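Whether those processor extensions are present can be checked directly from the CPU flags the kernel reports -- vmx indicates Intel VT-x and svm indicates AMD-V. A common preflight check before enabling KVM:

```shell
# Count the CPU flag lines advertising hardware virtualization support.
# vmx = Intel VT-x, svm = AMD-V; a count of 0 means KVM cannot use
# hardware-assisted virtualization on this machine.
grep -c -E 'vmx|svm' /proc/cpuinfo \
  || echo "no hardware virtualization extensions found"
```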
Today, KVM is widely recognized as a type 1 bare-metal hypervisor capable of enterprise-class performance. Examples of stand-alone KVM distributions include Red Hat Enterprise Virtualization Hypervisor.
However, the open source flexibility of KVM -- and Linux as a whole -- presents other challenges for enterprise IT staff. Organizations that choose to use Linux in production are generally hesitant to tinker with Linux builds. Pre-established Linux distributions that include KVM technology, such as Red Hat Enterprise Linux, SUSE Linux Enterprise Server, the Fedora Project and others, can eliminate the risks and costs of organizing and compiling a Linux environment for the business.
Organizations that instead choose to adapt and modify the Linux environment, including KVM, will need extensive skills in coding, compiling and testing Linux builds. Consequently, while Linux is well-suited to enterprise use, it's often deployed for specific applications, and KVM coexists with other commercial OS and hypervisor products.