
Why more mission-critical applications should be virtual

Virtualization can bring many benefits to mission-critical workloads, as long as organizations can get over their initial anxiety.

Virtualization has changed the server and operating system landscape. Gartner estimates that 70% of the world's server operating systems were running inside virtual machines in 2012, and the research firm projects that number to grow to more than 82% in 2016. That represents a significant shift in how operating systems are deployed.

About 10 years ago, virtualization was the buzzword of the day, and organizations were adopting a virtual-first strategy. Virtualization was becoming the default method of server deployment, and you needed a good reason to deploy an OS on physical hardware. Web servers, file servers, application servers … everything was going virtual. Everything, that is, except mission-critical applications. These apps were the backbone of an organization's core service, and they were not to be touched. They were simply too important.

The argument for virtualizing mission-critical applications was difficult to win in the early days of server virtualization. In today's IT landscape, that is no longer the case. Applications that are virtualized may well be better shielded from planned outages and in a position to automatically recover from unplanned disruptions. In fact, you may find that your mission-critical applications perform even better in a virtual environment.

Improved availability

Availability used to be a legitimate excuse for avoiding virtualization. Some key applications were considered too important to entrust to a potentially risky and unproven technology. That argument was understandable, and it is still common today. However, virtualization has matured to the point where this line of thinking needs to be revisited.

Almost any company can virtualize more of its applications, and it is not unrealistic for an organization to work toward an end goal of 100% virtualization. Many organizations resist this idea, often because they have had difficult experiences and failures in earlier virtualization attempts. Their skepticism, then, is understandable. However, the problems that derailed those earlier virtualization efforts could almost certainly be resolved by today's improved, more mature, feature-rich technology.

When a physical server dies, you may need to restore its data. In many cases, an OS backup is useless unless you can restore it onto identical hardware. As a result, you will likely find yourself reinstalling the OS and then spending hours restoring software and application data to the server -- not an ideal scenario for a mission-critical application.

With a virtualized server, failover occurs in a matter of minutes, whether within the same data center or at a site elsewhere in the world. In the time it takes for a host to fail two consecutive health checks and send a message to an admin's phone, the affected virtual servers will have restarted on a new physical host. The services are often recovered and back online before the monitoring tools have even finished notifying anyone that a failure occurred.
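The failover behavior described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual HA implementation: the `Host`, `heartbeat` and `failover` names are invented here, and real hypervisor HA features handle this far more robustly.

```python
# Hypothetical sketch of the failover logic described above: a host is
# declared failed after two consecutive missed heartbeat checks, and its
# virtual machines are restarted on a surviving host.

FAILURE_THRESHOLD = 2  # consecutive missed heartbeats before failover


class Host:
    def __init__(self, name):
        self.name = name
        self.vms = []           # names of VMs running on this host
        self.missed_checks = 0
        self.alive = True


def heartbeat(host, responded):
    """Record one heartbeat check; return True when failover is needed."""
    if responded:
        host.missed_checks = 0
        return False
    host.missed_checks += 1
    if host.missed_checks >= FAILURE_THRESHOLD:
        host.alive = False
        return True
    return False


def failover(failed_host, surviving_hosts):
    """Restart the failed host's VMs on the least-loaded surviving host."""
    for vm in failed_host.vms:
        target = min(surviving_hosts, key=lambda h: len(h.vms))
        target.vms.append(vm)
    failed_host.vms = []
```

In this sketch, a single missed check is tolerated as a transient glitch; only the second consecutive miss triggers relocation, which mirrors the "two consecutive tests" behavior the article describes.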

Virtualization's effect on performance

People often fear that software performance will suffer in a virtual environment. In some cases, this reluctance stems from a database administrator or another specialist who has dedicated years to fine-tuning kernel parameters and TCP stacks to squeeze every last drop of performance out of the hardware. In reality, advances in CPU power and memory density have made that practice far less important.

In the physical world, many data centers average less than 10% CPU utilization. Now take that statistic, quadruple the speed of the CPU and increase the RAM from 16 gigabytes (GB) to 512 GB. That is where modern hardware stands today.

All of that added capacity means resources are being wasted. By placing multiple operating systems on a single server, however, you can achieve far better resource utilization -- and possibly even improved performance.

Imagine a single server with 256 GB of RAM running Microsoft Exchange with 16,000 mailboxes. Servicing that many mailboxes from a single server would be a real challenge. Now, replace the bare-metal OS install with a hypervisor and four virtual servers on the same physical hardware. Each virtual server runs the same operating system and software as the physical server did, while hosting 4,000 mailboxes apiece. By dividing the resources, each operating system and application stack can better leverage its slice of the resource pie. In many cases, density and performance both improve through virtualization.
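The arithmetic behind the Exchange example above is a simple even split. A minimal back-of-the-envelope calculation, assuming the four virtual servers are sized identically (the per-VM RAM figure is not stated in the article; it simply follows from dividing the host's totals by four):

```python
# Splitting the hypothetical 256 GB Exchange host into four equal VMs.
total_ram_gb = 256
total_mailboxes = 16_000
vm_count = 4

ram_per_vm = total_ram_gb // vm_count          # 64 GB per virtual server
mailboxes_per_vm = total_mailboxes // vm_count  # 4,000 mailboxes each
```

Each VM now services a quarter of the mailbox load with a quarter of the memory, which is the "slice of the resource pie" the article refers to.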

Oracle databases long led the "don't-virtualize-this" chorus, and Oracle itself adjusted its licensing models to discourage virtualization. But even that has changed: Oracle has since adopted a licensing model that is friendlier to virtualization.

In 2013, when a projected 55% of Oracle and IBM DB2 databases were virtualized, Wikibon upgraded its guidance on Oracle virtualization from a recommendation to a best practice. In 2016, Oracle and IBM DB2 virtualization is projected to reach 84% of all instances. The world is recognizing how virtualization improves performance and availability.

Next Steps

Benefits of virtualizing mission-critical apps

Dig Deeper on Improving server management with virtualization