Many virtualization vendors today would tell you there are very few workloads you cannot virtualize. Large, resource-hungry workloads are no longer off limits when you can build a virtual machine with 64 virtual CPUs, 1 TB of memory and direct access to the storage area network. However, just because you can virtualize something doesn't mean you should.
The purist would say yes, there are no longer technical limits for VMware and Hyper-V. The modern hypervisor can scale far beyond what most workloads will ever demand. In the end, you have to ask three questions to decide what not to virtualize:
- Is it cost effective?
- Is it supported by the vendor?
- Is it worth the challenge?
Virtualization cost effectiveness
One piece that businesses often overlook is the cost of adding to their virtual environment. You may even hear that virtual machines (VMs) are free. We know the infrastructure to support those "free" VMs isn't free. The hosts, networking and storage all cost something, but in many cases the return on investment is many times the initial capital outlay. One place where the return on investment may fall short, however, is storage. For many virtual environments, shared storage is central to how hypervisor vendors deliver live migration and failover features.
The concern is that shared storage is expensive because traditionally it lives on a storage area network (SAN) or network-attached storage (NAS). You can easily create a VM with 1 TB, 2 TB or more of storage with either VMware or Microsoft. The question is not whether it can be done, but whether you should do it. Here are a couple of examples of what not to virtualize:
- Imaging servers -- These are low-hanging fruit for most organizations. These servers hold dozens of images used to re-image user workstations. The images are not small and can grow even larger if you pre-install applications. Adding more images creates a block of fixed data on your virtual infrastructure, only some of which is ever used, and you may have to keep images for older machines that are still in production but rarely deployed. This valid but rarely used data costs you money by taking up valuable real estate on your SAN.
- Patching servers (both Microsoft and VMware) -- Each vendor offers centralized management and deployment for patches, updates and hotfixes, letting administrators control which server gets which updates and when. These systems are local repositories and can hold dozens to thousands of updates for your systems. When you virtualize them, you are importing required data, but also rarely used and possibly stale data.
In both of these examples, the data is not tier one or tier two quality, but it lands on that level of disk because these VMs often share space and resources with higher-tier VMs. Cost-savvy admins will move it to SATA disk (if they have it) to reduce costs, but that data normally still resides in your storage frame.
We become so focused on 100% virtualization that we forget about using regular physical servers as something other than virtualization hosts. Modern servers can accommodate terabytes of storage and deliver solid performance at a reasonable cost. While you lose the benefits of virtualization, the savings can be enough to justify the trade-off for servers and applications that carry large, noncritical, static data.
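The trade-off above is ultimately arithmetic. A back-of-the-envelope sketch like the following can make the case concrete; the library size and per-GB rates here are hypothetical placeholders, so substitute the figures from your own SAN and server quotes.

```python
# Rough comparison of keeping a static image library on tier-one SAN
# storage versus local SATA disk in a physical server.
# All sizes and prices below are assumed example values, not real quotes.

def annual_storage_cost(gb: int, cost_per_gb_per_year: float) -> float:
    """Yearly cost of storing `gb` gigabytes at a given per-GB rate."""
    return gb * cost_per_gb_per_year

image_library_gb = 2_000   # ~2 TB of workstation images (assumed)
san_tier1_rate = 3.00      # $/GB/year on tier-one SAN disk (assumed)
local_sata_rate = 0.50     # $/GB/year on local SATA disk (assumed)

san_cost = annual_storage_cost(image_library_gb, san_tier1_rate)
sata_cost = annual_storage_cost(image_library_gb, local_sata_rate)

print(f"Tier-one SAN: ${san_cost:,.0f}/year")
print(f"Local SATA:   ${sata_cost:,.0f}/year")
print(f"Savings:      ${san_cost - sata_cost:,.0f}/year")
```

Even at these made-up rates, a 2 TB image library saves thousands of dollars a year off the SAN, and the gap only widens as the library grows.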
Is it supported by the vendor?
Support has always been a hot topic in the virtualization world. Support for applications running inside a virtual environment is widespread today, but even as recently as a few months ago, there were still vendors that did not support it. One of the last big holdouts was Oracle; in 2013, however, it changed course and now supports virtual environments from both Microsoft and VMware, as long as you pay close attention to the fine print of exactly how to do it.
You can always install an application without vendor support for virtualization, and it may run fine for days, weeks or years. You're playing the odds that you won't need support and, when you do, hoping the vendor supports virtualization by then. If a problem does arise, the vendor may ask you to recreate the issue on physical hardware, which may not be possible. In those cases, it might be better to stick with physical hardware until your software vendor supports virtualization.
Is it worth the challenge?
With virtualization, and specifically VMware, there are multiple technologies to overcome possible barriers to virtualizing applications. Raw device mapping (RDM) and other technologies make your hypervisor flexible enough to support almost anything you can throw at it. The real challenge in determining what not to virtualize is weighing whether it is both cost effective and simple to support.
We have all seen a real-life example of an approach that works but is overly complicated and difficult to explain. A common example is setting up Microsoft clustering in VMware. RDM makes it possible, but as you dig in, you find limitations, such as not being able to perform Distributed Resource Scheduler vMotions with clustered guests. Depending on your design, these limitations can mean losing some of the valuable features virtualization brings, or force more complex workarounds.
The clear answer
Both Microsoft and VMware have the technology to virtualize almost anything you can imagine. The virtual infrastructure is often treated as a tier one platform running on tier one grade hardware. The questions arise because, many times, not all of the VMs are tier one, or they require additional features that bring cost and complexity into the picture. It comes down to what I laid out earlier: Just because you can virtualize something doesn't mean it's the best approach. Your job as a virtual admin is now to help decide what not to virtualize.