With all the virtualization vendors, acquisitions and products, keeping track of today’s hypervisor market may seem daunting. Only a few years ago, the concept of virtualization was little more than a technological curiosity, but a recent TechTarget survey revealed that at least 60% of organizations are now deploying some form of server virtualization.
Let’s examine the players in today’s hypervisor market and the criteria to help you make the best selection for your data center.
Virtualization players and features
Understanding the hypervisor market starts by knowing the players and their approximate share of the overall market. The top three vendors and products in the server virtualization space are VMware Inc.’s vSphere, Microsoft’s Hyper-V (part of Windows Server 2008 R2) and Citrix Systems Inc.’s XenServer. Based on a 2011 TechTarget survey, these “big three” virtualization vendors account for more than 80% of the hypervisor market. VMware is regarded as the undisputed leader, with survey results reporting more than 70% market share spread across several key hypervisor versions.
There are other virtualization vendors and hypervisors to choose from, including products from Oracle, Red Hat, SUSE, Sun Microsystems, Hewlett-Packard and IBM. But these other hypervisors collectively make up only a small minority of the total market. In most cases, data centers select other hypervisors when they use certain server hardware or operating systems and must ensure compatibility or support.
Hypervisor market options: Different ways to accomplish the same goals
All three top products are “bare metal” or Type 1 hypervisors, which have direct access to the server’s hardware resources. This allows more virtual machines to reside on each server while supporting better performance of each virtual machine (VM). In essence, guest operating systems run on top of the hypervisor.
By contrast, “hosted” or Type 2 hypervisors require a server operating system to load first and then start the hypervisor on top. The extra layer of software between the hypervisor and server hardware can affect performance or limit the total number of VMs on a server.
The underlying role of every hypervisor is almost identical. For example, every hypervisor must establish an isolated environment for each VM, allocate and manage server resources for each VM, and support the operation of each VM. The ways in which hypervisors accomplish these basic goals can be very different, as are their feature sets.
What to look for in a hypervisor
It’s important to review the specific features of each hypervisor. Consider the following key factors when selecting a hypervisor:
Performance. VMs and the applications they contain must run well and access server resources efficiently. This makes a strong case for Type 1 hypervisors.
Hardware compatibility. The hypervisor has to run on your servers. Type 2 hypervisors running on Windows or Linux can be a bit more hardware-agnostic, but performance may suffer. Type 1 hypervisors must meet hardware requirements such as processors equipped with virtualization extensions (Intel VT-x or AMD-V).
Ease of use and management. An IT staff must have the expertise to install, configure and maintain the selected hypervisor. Major vendors with large user communities can make these processes much easier. In addition, the hypervisor must provide management features and some level of automation.
Reliability. A software bug can disrupt a large number of virtual machines, so hypervisors must be well tested and supported in order to be suitable for a production environment.
Scalability. Make sure that the hypervisor can support the size of VMs that workloads require. For example, if you need VMs with 1 TB of memory, the hypervisor must support that. This is sometimes called scaling up.
Cost. Although most vendors provide a basic hypervisor with limited features for free, advanced features and management capabilities can become quite costly. It’s important to consider the licensing costs for each hypervisor and the added cost of management tools.
For example, VMware vSphere includes features like vMotion, Distributed Resource Scheduler and Storage vMotion. By comparison, Microsoft Hyper-V includes Live Migration and thin provisioning.
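As a concrete illustration of the hardware-compatibility factor above, a server’s CPU flags reveal whether the virtualization extensions that Type 1 hypervisors rely on are present. On Linux, Intel VT-x appears as the vmx flag and AMD-V as the svm flag in /proc/cpuinfo. Here is a minimal sketch; the function name and parsing logic are illustrative, not taken from any particular tool:

```python
# Check CPU flags for hardware virtualization extensions (Linux).
# Intel VT-x shows up as "vmx", AMD-V as "svm" in /proc/cpuinfo.

def has_virt_extensions(cpuinfo_text):
    """Return True if any CPU flags line lists vmx or svm."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # Lines look like: "flags : fpu vme ... vmx ..."
            flags.update(line.split(":", 1)[1].split())
    return bool(flags & {"vmx", "svm"})

# On a real host: has_virt_extensions(open("/proc/cpuinfo").read())
sample = "processor : 0\nflags : fpu vme de pse vmx ssse3\n"
print(has_virt_extensions(sample))  # True for this sample
```

If the check fails, a bare-metal hypervisor generally will not install, which is one reason a Type 2 product on an ordinary operating system can be the more hardware-agnostic fallback.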
Virtualization expert Bill Kleyman explains that selecting from the hypervisor market comes down to support, validation and workload compatibility. “Can organizations use less-known hypervisors such as Proxmox? Of course,” he said. “But they run the risk of seeing their virtualization platform unsupported by major application and even hardware vendors.”
The availability of support and compatibility almost always favors the top vendors with well-established and validated designs. Kleyman says the support issue can easily offset any cost considerations. Lesser-known products might be less expensive, but the need to design, test, evaluate and support a data center with these products can prove more costly in the long run than using those from hypervisor market leaders.
“Open source and less-known vendors will always have a place in the industry for organizations that have the time to dedicate to the hypervisor configuration, troubleshooting and testing,” Kleyman said.
Why you should evaluate the hypervisor market
The principal reason for virtualization is consolidation, or doing more with less. Perhaps the most recognizable example is server consolidation, which allows multiple workloads to run on the same physical host server. Server consolidation vastly increases the utilization of server computing resources and reduces the total physical server count, resulting in lower capital expenses and reduced power and cooling costs. TechTarget survey results reveal that 59% of IT professionals are using virtualization to consolidate servers.
Storage consolidation is another important benefit: storage from disparate arrays can be aggregated into a pool and provisioned without regard for physical storage locations. Network consolidation allows a physical LAN to be partitioned into multiple logical networks, and multiple physical networks can be aggregated into a single logical LAN, all benefiting security and traffic control.
Even at the user’s endpoint, application virtualization and virtual desktop infrastructure (VDI) products have fundamentally changed application delivery by locating desktop resources on central servers, where administrators can control and secure desktops more completely. According to survey data, almost 20% of IT respondents are deploying or expanding endpoint virtualization.
Consolidation can lead to better systems management. Rather than managing 50 workloads across 50 or more servers, for example, an administrator can use software tools to monitor and optimize the behavior of 50 virtualized workloads across 10 or even five servers. “Products like up.time or Microsoft's SCCM [System Center Configuration Manager] 2012 specifically try to augment and simplify the virtual infrastructure management process,” Kleyman said.
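The consolidation arithmetic in the example above is simple to sketch. Assuming a uniform VM density per host (the densities here are illustrative, not survey figures):

```python
import math

def hosts_needed(workloads, vms_per_host):
    # Physical servers required once the workloads are virtualized
    # at a given VM-per-host density.
    return math.ceil(workloads / vms_per_host)

# 50 workloads that once needed 50 physical servers:
print(hosts_needed(50, 5))   # 10 hosts at 5 VMs per host
print(hosts_needed(50, 10))  # 5 hosts at 10 VMs per host
```

Real sizing is driven by CPU, memory and I/O headroom rather than a flat VM count, but the ratio shows why consolidation cuts capital, power and cooling costs so sharply.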
Management improvements also make the data center more agile, allowing new workloads to be established and configured in a fraction of the time needed to justify, order, receive and deploy traditional physical hardware. At least 28% of survey respondents use virtualization for resource allocation, while another 20% use virtualization to maintain “golden images” of virtual machines.
This agility directly translates to mission-critical activities like disaster recovery, and survey data reports that 43% of IT professionals are using virtualization for disaster recovery and availability. Because virtual machines are easily copied, it’s far simpler to protect them and replicate virtual environments to remote data center facilities. Administrators can then run the replicated virtual instances in the remote site if the need arises or recover the replicated instances to their original locations.
Why companies forgo virtualization
Although the vast majority of IT organizations are moving quickly to implement and expand the use of virtualization, it’s important to note that there are reasons to avoid the technology. According to survey responses, the small number of professionals avoiding virtualization did so largely because of practical considerations rather than objections to the technology itself.
Some of TechTarget’s survey respondents felt that there were not enough servers or endpoints to justify the effort, growth was not a problem, current applications were not suited to virtualization, or there was no executive approval or money in the budget.
“There are also still a lot of companies that wrongly fear a virtualized production environment, usually from a stability point of view,” said Chris Steffen, principal technical architect at Kroll Factual Data. “And there are still some businesses with antiquated processes and applications that won’t run in a virtualized environment.”
But the arguments against virtualization are getting more difficult to make. Kleyman explains that major software and database vendors are now designing their application packages to be virtualization-ready, and vendors have considered virtualization best practices as part of their software design.
“This picture was completely different just five years ago, when many organizations forbade moving their software to a virtual platform,” he said. “Now, this practice is welcomed.” This change of heart is mainly due to the maturation of the products, which are more stable and perform much better with modern server hardware.