Server uptime and hardware failure guide
Over the last couple of years, I have read articles stating that organizations should now be fully virtualized.
The reasoning behind these articles was that virtualization is a mature technology and that it is now possible to virtualize pretty much any workload, including those that are very resource intensive. Some of the articles also made the argument that virtualization is a stepping stone in the transition to public cloud environments. In spite of what these articles might say, however, there are still some workloads that should remain on physical hardware. In this article, I want to talk about some of those types of workloads and whether it makes sense to virtualize them or not.
Too big to fail
As previously mentioned, server virtualization has matured to the point that even very large, resource-intensive workloads can safely be virtualized. The problem with virtualizing these types of workloads, however, is fault tolerance.
Imagine for a moment that your organization runs a mission-critical, extremely resource-intensive database application that is hosted on physical hardware. Chances are the application is clustered in a way that makes it resilient to a server-level failure.
Whether you virtualize or not, it would obviously still be possible to protect the workload using failover clustering. You could create a guest cluster within the virtual server environment, or you could use host-level clustering to automatically live migrate the virtual machine (VM) to a different virtualization host in the event of a host server failure. The problem with this, however, is resource consumption.
The whole premise of server virtualization is that VMs share a pool of physical hardware resources. Extremely heavy workloads might consume so many resources that it is almost impossible for them to fail over to another host if any other workloads are running on that host. For now, it's probably more practical to keep that type of workload running on physical hardware unless you have a pressing business need to virtualize it, such as plans for an eventual cloud migration.
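To make the capacity problem concrete, consider a simple N+1 check: if any one host fails, can the surviving hosts absorb its VMs? The sketch below is purely illustrative; the host sizes, VM names and memory figures are made-up numbers, and the model deliberately ignores CPU, reservations and placement constraints.

```python
# Hypothetical N+1 capacity check. All capacities and demands are
# illustrative numbers, not measurements from a real cluster.

def can_survive_host_failure(hosts, vms):
    """hosts: {host_name: memory_gb}; vms: {vm_name: (host_name, memory_gb)}.
    Returns True if, for every host, the memory its VMs consume fits in
    the spare memory left on the surviving hosts. This is a deliberately
    simplified model: it ignores CPU, fragmentation and affinity rules."""
    used = {h: 0 for h in hosts}
    for host, mem in vms.values():
        used[host] += mem
    for failed in hosts:
        demand = used[failed]
        spare = sum(hosts[h] - used[h] for h in hosts if h != failed)
        if demand > spare:
            return False
    return True

hosts = {"host1": 256, "host2": 256, "host3": 256}  # GB of RAM per host
vms = {
    "db-huge": ("host1", 230),  # the "too big to fail" database VM
    "app1": ("host2", 200),
    "app2": ("host3", 200),
}
print(can_survive_host_failure(hosts, vms))  # False: nothing can absorb db-huge
```

A cluster of lightly loaded hosts passes the same check, which is exactly why a single giant VM is the problem case: it is the one workload whose demand can exceed everyone else's spare capacity combined.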
Resource intensive workloads
In the previous section I discussed extremely resource-intensive workloads from a failover clustering standpoint. However, there may also be logistical issues that prevent you from virtualizing some large-scale workloads. Hypervisors such as VMware ESXi and Microsoft Hyper-V limit the scale of VMs. For example, there are limits to the number of virtual CPUs and to the amount of memory that can be assigned to a VM. Admittedly, it takes an extremely large VM to exceed these limits, but they are real, and you may occasionally run up against them if the workload you are considering virtualizing is large enough.
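A simple pre-check along these lines would compare a workload's requirements against the hypervisor's documented per-VM maximums. The limit values below are placeholders, not the real maximums for any particular ESXi or Hyper-V release; substitute the figures from your vendor's configuration-maximums documentation.

```python
# Placeholder per-VM limits -- NOT the real maximums for any hypervisor
# release. Replace with the documented values for your version.
LIMITS = {"max_vcpus": 128, "max_memory_gb": 6144}

def fits_in_a_vm(vcpus, memory_gb, limits=LIMITS):
    """Return a list of limit violations; an empty list means the
    workload fits inside a single VM under the given limits."""
    problems = []
    if vcpus > limits["max_vcpus"]:
        problems.append(f"needs {vcpus} vCPUs, limit is {limits['max_vcpus']}")
    if memory_gb > limits["max_memory_gb"]:
        problems.append(f"needs {memory_gb} GB, limit is {limits['max_memory_gb']}")
    return problems

print(fits_in_a_vm(96, 2048))    # fits under the placeholder limits -> []
print(fits_in_a_vm(192, 8192))   # exceeds both placeholder limits
```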
When deciding whether to virtualize, you should also consider a workload's dependency on physical hardware. Hardware dependency can come in a variety of forms. For instance, I recently saw an application that was hard-coded to use a very specific host bus adapter. This dependency would prevent that particular application from working correctly on a virtual server.
Another form of hardware dependency you may occasionally encounter has to do with copy protection. There are applications that check for the presence of a USB flash device or examine a processor's serial number in order to prevent the application from being illegally copied. Servers running applications that use physical hardware as the basis of a copy protection mechanism are typically poor candidates for virtualization.
Obscure or unsupported OSes
You might also find it impractical to virtualize servers that are running obscure, outdated or otherwise unsupported operating systems (OSes). Not only are such OSes unsupported by the hypervisor vendor, but components such as VMware Tools and the Hyper-V Integration Services are only designed to work with specific OSes.
There are actually two different schools of thought when it comes to virtualizing servers that are running outdated OSes. One school of thought suggests that you should never run an unsupported OS on a hypervisor. The other school of thought suggests that going ahead and virtualizing the server eliminates its dependency on severely outdated physical hardware.
I once virtualized a server that was running Windows NT, even though Windows NT was not officially supported by the hypervisor vendor. Although the virtualization process proved to be more difficult than I was expecting, it did work and the organization was finally able to retire the ancient hardware on which the server had been running.
Dependency on physical storage
One last reason why you may want to avoid virtualizing certain workloads is that some workloads depend on physical storage. In all fairness, Hyper-V and VMware both have a way of attaching a VM directly to a physical disk. In Hyper-V, for example, the physical storage is attached to the VM as a pass-through disk.
Although the use of pass-through disks is fully supported by the hypervisor vendors, using them can complicate the backup process. Most of the Hyper-V backup applications that I have seen do not support backing up pass-through storage if the backup is created at the host level.
In my opinion, there are some workloads that simply should not be virtualized. Keep in mind, however, that technology changes, and just because a workload is not suitable for virtualization today does not mean that it cannot be virtualized a year or two from now.