Virtualization has changed the face of modern computing: administrators can provision computing resources and operate workloads completely decoupled from the underlying servers. To accomplish this, a hypervisor is typically installed directly on the server's hardware, and virtual machines are then created above the hypervisor to run a wide range of operating systems and applications. But a new virtualization model is emerging that allows one hypervisor to run within another, letting IT professionals mix hypervisors and build complex virtualized environments that were not practical before. Although the technology isn't quite ready for busy production data centers, interest in "nested virtualization" is growing, and vendors are demonstrating serious support. Here's what you need to know about nested virtualization today.
Which hypervisors support nested VMs, and are there any hardware requirements?
Nested virtualization -- or nested VMs -- is not a new idea. VMware discussed the issue as far back as 2008, and a VM created with one hypervisor should ideally work when nested inside a VM created with another. For example, a host hypervisor like ESXi 6.0 can support guest hypervisors including Hyper-V, Xen and KVM. However, the ability of a host hypervisor to support a particular guest hypervisor should never be assumed. It's always best to start your nested virtualization research by checking with the hypervisor vendors to determine which combinations of host and guest hypervisors are known to work. If you cannot find documentation that supports your desired combination, you can still experiment in a controlled environment and benchmark the results for yourself, which is always a sound practice.
The principal issue with nested virtualization has been the potential performance impact on guest VMs, also known as nested VMs. Hypervisors like ESXi, Hyper-V, Xen and KVM all need access to the processor hardware extensions that accelerate virtualization: Intel VT-x with extended page tables (EPT) and VM control structure (VMCS) shadowing, and AMD-V with rapid virtualization indexing (RVI). This isn't a problem for modern servers, since both vendors added these processor extensions back in 2006. But once a hypervisor was installed on the server's bare-metal hardware, the host hypervisor typically didn't expose the server's virtualization features to guest hypervisors, resulting in poor guest hypervisor performance -- if the nested VM launched at all.
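A quick way to confirm those extensions on a Linux host is to look for the vmx (Intel VT-x) or svm (AMD-V) CPU flags. A minimal check, assuming a Linux system where /proc/cpuinfo is available:

```shell
# Count logical CPUs advertising hardware virtualization support.
# A result of 0 means VT-x/AMD-V is absent or disabled in the server firmware.
grep -cE 'vmx|svm' /proc/cpuinfo
```

Note that a zero count on modern hardware usually means the extensions are switched off in the BIOS/UEFI setup rather than missing from the processor.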
Modern hypervisors like ESXi 5.1 and later are able to virtualize these processor and memory enhancements and expose them to guest VMs, so a guest hypervisor running inside a VM can deliver hardware-accelerated performance to its own nested VMs.
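KVM hosts take a similar approach: the kvm_intel (or kvm_amd) kernel module has a "nested" parameter that controls whether VT-x/AMD-V is exposed to guests. A sketch, assuming an Intel-based Linux host (the file name below is illustrative; any file in /etc/modprobe.d will do):

```
# /etc/modprobe.d/kvm-nested.conf -- set the option at module load time
options kvm_intel nested=1

# After reloading the module, confirm the setting:
#   cat /sys/module/kvm_intel/parameters/nested
# which reports "Y" (or "1") when nesting is enabled.
```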
Although current hypervisors should support nesting, remember that it may be necessary to deliberately enable virtualized hardware assistance as a feature of the host hypervisor before nested VMs -- and guest hypervisors -- can be deployed properly. For example, ESXi 5.1, 5.5 and 6.0 all require administrators to open the processor settings screen in the Web client and check the "Expose hardware-assisted virtualization to the guest operating system" box, while VMware Workstation 8 and Player 4 require administrators to check the "Virtualize Intel VT-x/EPT or AMD-V/RVI" box in the processor settings screen. As another example, enabling nested virtualization in Xen may require changes to the Xen guest configuration file.
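As a sketch of what such a change looks like (option names from Xen's xl domain configuration format; exact requirements vary by Xen version), a guest that will itself run a hypervisor might be configured with:

```
# xl guest configuration fragment (illustrative)
type = "hvm"        # nested virtualization requires a fully virtualized (HVM) guest
hap = 1             # hardware-assisted paging (EPT/RVI) must stay enabled
nestedhvm = 1       # expose the virtualization extensions to this guest
```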