
How the Hyper-V architecture differs from VMware ESXi

Microsoft Hyper-V and VMware ESXi are both Type 1 hypervisors, but they have key architectural differences you should know about.

There are many architectural differences between the way Microsoft's Hyper-V and VMware's ESXi work, yet most virtualization administrators are unaware of them. Many administrators are also confused about how Hyper-V operates as a hypervisor in relation to the host operating system.

A common misconception about Microsoft Hyper-V is that, because a Windows OS is required to install it, Hyper-V runs on top of the host operating system rather than directly on the hardware. In reality, once the Hyper-V role is enabled through Server Manager, the hypervisor code is configured to start in the Windows kernel space. Components running in kernel space have direct access to the hardware, and the same applies to Hyper-V. VMware's ESXi takes a different approach: the ESXi hypervisor ships as a standalone ISO image that installs directly on the bare-metal server, with no general-purpose host OS involved.
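On the Hyper-V side, one way to see that the hypervisor really is loaded beneath Windows, rather than running as an application on it, is to check the host after the role has been enabled. The short Python sketch below is a hedged example of such a check: it shells out to the built-in Windows systeminfo tool and looks for the note that appears in its Hyper-V section once a hypervisor has been detected. The exact wording of that note can vary between Windows versions, so treat the string match as an assumption rather than a guaranteed interface.

import subprocess

def hypervisor_detected() -> bool:
    """Run the built-in Windows 'systeminfo' tool and report whether its
    output indicates that a hypervisor (such as Hyper-V) is already running.

    Assumption: on hosts where Hyper-V is active, the Hyper-V Requirements
    section of systeminfo states that a hypervisor has been detected; the
    phrasing may differ slightly between Windows versions.
    """
    result = subprocess.run(
        ["systeminfo"], capture_output=True, text=True, check=True
    )
    return "A hypervisor has been detected" in result.stdout

if __name__ == "__main__":
    print("Hypervisor present:", hypervisor_detected())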

Both Hyper-V and ESXi are Type 1 hypervisors. A Type 1 hypervisor runs directly on top of the hardware and can be further classified into one of two designs: microkernelized or monolithic. The two designs differ mainly in where the device drivers reside and where the controlling function runs.

In a monolithic design the drivers are included as part of the hypervisor

As the diagram above shows, in a monolithic design the drivers are included as part of the hypervisor. VMware ESXi uses a monolithic design to implement all of its virtualization functions, including virtualized device drivers, and VMware has used this design since its first virtualization product. Because the device drivers are part of the hypervisor, VMs running on an ESXi host communicate with the physical hardware directly through the hypervisor code, with no intermediary device layer in between.

In the microkernelized design, which is used by the Microsoft Hyper-V architecture, the hypervisor code runs without the device drivers.

In a microkernelized design the hypervisor code is running without the device drivers

As shown in the microkernelized design above, the device drivers are installed in the host OS, and requests from VMs to access hardware devices are serviced through it. In other words, the host OS controls access to the hardware. A VM can use two types of virtual devices: synthetic and emulated. Synthetic devices are faster than emulated devices, but a VM can use them only if Hyper-V Integration Services have been installed in the VM. Integration Services implement the VMBus/VSC design in the VM, which provides a much more direct path to the hardware.

For example, to access the physical network adapter, the Network VSC driver running in the VM talks to the Network VSP driver running in the host OS. The communication between the Network VSC and the Network VSP takes place over the VMBus, and the Network VSP then uses the host's device driver to communicate directly with the physical network adapter. The VMBus runs in kernel space in the host OS, which speeds up communication between the VMs and the hardware. If a VM does not implement the VMBus/VSC design, it falls back to device emulation, which is slower.
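The VSC/VMBus/VSP chain is easier to picture as a simple message flow. The Python sketch below is purely a conceptual model, not Hyper-V code: every class and method name in it (NetworkVSC, VMBus, NetworkVSP, EmulatedNic and so on) is invented for illustration, and the physical adapter is just a stub. It traces the two paths described above: a VM with Integration Services sends its request through the VSC onto the VMBus, where the VSP in the host drives the physical adapter, while a VM without Integration Services goes through a slower emulated device instead.

class PhysicalNic:
    """Stand-in for the physical network adapter managed by the host OS."""
    def send(self, frame: bytes) -> str:
        return f"NIC transmitted {len(frame)} bytes"

class NetworkVSP:
    """Virtualization service provider in the host OS; owns the real driver."""
    def __init__(self, nic: PhysicalNic):
        self.nic = nic

    def handle(self, frame: bytes) -> str:
        return self.nic.send(frame)

class VMBus:
    """Kernel-space channel that carries requests from VSCs to VSPs."""
    def __init__(self, vsp: NetworkVSP):
        self.vsp = vsp

    def submit(self, frame: bytes) -> str:
        return self.vsp.handle(frame)

class NetworkVSC:
    """Virtualization service client inside a VM with Integration Services."""
    def __init__(self, vmbus: VMBus):
        self.vmbus = vmbus

    def send(self, frame: bytes) -> str:
        return "synthetic path: " + self.vmbus.submit(frame)

class EmulatedNic:
    """Fallback for a VM without Integration Services: the host emulates a
    legacy adapter, which adds processing overhead on every request."""
    def __init__(self, nic: PhysicalNic):
        self.nic = nic

    def send(self, frame: bytes) -> str:
        emulated_frame = bytes(frame)  # extra copy stands in for emulation cost
        return "emulated path: " + self.nic.send(emulated_frame)

if __name__ == "__main__":
    nic = PhysicalNic()
    vmbus = VMBus(NetworkVSP(nic))
    print(NetworkVSC(vmbus).send(b"hello"))  # VM with Integration Services
    print(EmulatedNic(nic).send(b"hello"))   # VM relying on device emulation

In the real architecture these hops happen in kernel space rather than as Python method calls; the sketch only captures the shape of the two paths.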

Whichever design a virtualization vendor chooses, there has to be a controlling function that manages all aspects of the hypervisor and creates the virtualization environment. The Microsoft Hyper-V architecture implements the controlling function in the host Windows OS; in other words, the host OS controls the hypervisor, which runs directly on top of the hardware. In VMware ESXi, the controlling function is implemented within the ESXi kernel itself.

It is difficult to say which design is better, but each has advantages and disadvantages. Because the device drivers are included as part of the ESXi kernel, ESXi can be installed only on supported hardware. The Microsoft Hyper-V architecture removes this restriction and allows the hypervisor code to run on any hardware the host Windows OS supports, which reduces the overhead of maintaining a separate device driver library. Another advantage of the microkernelized design is that individual device drivers do not have to be installed in each VM; the drivers in the host OS service the VMs' requests. ESXi's virtualization components also have direct access to the hardware, but you cannot add other roles or services to an ESXi host. Although it is not recommended to install additional roles and features on a system that functions as a hypervisor, a host running Hyper-V can also be configured to run other roles, such as DNS and failover clustering, as the sketch below illustrates.
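As an illustration of that last point, the following Python sketch shells out to PowerShell's Install-WindowsFeature cmdlet to add the DNS Server and Failover Clustering roles and features alongside Hyper-V on a Windows Server host. The cmdlet and the feature names used here (DNS, Failover-Clustering) are the standard ones on recent Windows Server releases, but treat the wrapper itself as a hedged sketch rather than a recommendation; as noted above, keeping a hypervisor host lean is usually the wiser choice.

import subprocess

# Roles and features the article mentions as examples of what a Hyper-V host
# can also run. Names as reported by PowerShell's Get-WindowsFeature.
FEATURES = ["DNS", "Failover-Clustering"]

def install_feature(name: str) -> None:
    """Invoke Install-WindowsFeature for one role or feature.
    Must be run from an elevated session on Windows Server."""
    subprocess.run(
        [
            "powershell.exe",
            "-NoProfile",
            "-Command",
            f"Install-WindowsFeature -Name {name} -IncludeManagementTools",
        ],
        check=True,
    )

if __name__ == "__main__":
    for feature in FEATURES:
        install_feature(feature)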
