What to consider before virtualizing mission-critical applications

If they're not broken, why fix mission-critical applications by virtualizing them? For the same reasons you virtualize other apps: efficiency and flexibility.

Many IT shops have abandoned physical servers and traveled far down the virtualization road. As virtualization has moved from the trendy minority to the trusted majority, those who have resisted it are considered dinosaurs clinging to yesterday's practices. But one has to ask, "Are they resisting the inevitable or simply protecting their business?"

When the ability to conduct business is on the line, IT departments exercise an abundance of caution. For some, the prospect of virtualizing mission-critical applications has been considered off-limits because, frankly, why fix what isn't broken? But over the past few years, this mentality about virtualizing critical applications has shifted.

What are mission-critical applications?

A mission-critical application is an essential component of core business functions; a failure or interruption in one can severely impair an organization's ability to conduct business. The term tier-one application is often used synonymously, though it refers to an application's performance needs: a tier-one application requires finely tuned resources and reliable hardware to meet its performance targets. Not every mission-critical application is a tier-one application, but a tier-one application is almost always mission-critical.

Why virtualize mission-critical workloads?

So, first, it may be time to revisit the "Why fix what isn't broken?" philosophy of deploying mission-critical applications. Five years ago, no one would have labeled a physical server as broken for using only a third of its available processing power, or just a fraction of its memory. Yet that is exactly where we find ourselves in today's data center. A physical server now holds more compute resources than the average operating system or software platform can use. In a physical server environment, efforts are needlessly replicated, valuable resources are left untapped and power consumption keeps climbing. At the same time, the value of these mission-critical applications is constant. What was once conservative and safe is now beginning to look broken.

With the power of modern server hardware and hypervisors, you no longer sacrifice high-end performance to gain virtualization benefits such as high availability and more efficient resource utilization. Given their ability to fully exploit the vast resources of modern x86 servers, virtual platforms can often match the performance of physical servers, if not exceed it.

A virtual server is also portable, no longer tied to a specific piece of hardware. In terms of availability and disaster recovery, this is a significant advantage. Whereas recovering a physical system often requires a second set of identical hardware, almost any x86 hardware can now be enlisted to recover a virtual server. And what application could be more in need of a solid and efficient disaster recovery or high-availability solution than a mission-critical application? In fact, even if an application's virtual performance may not match the performance in a physical environment, availability gains could outweigh small dips in performance.

Designing a mission-critical infrastructure

Once the decision has been made to virtualize, you need to build appropriate strategies for virtualizing mission-critical applications, and you may need a unique strategy for each. If an application has multiple components, evaluate the benefits of affinity and anti-affinity rules, which control where VMs can be located, either to keep components on the same physical host or to force them onto separate hosts. In some cases, you may want components on the same host to improve performance; other applications may require the resilience of spreading components across different hosts. You may also need to organize hypervisor clusters or use affinity rules to adhere to licensing requirements. Though it is rare, you may also want to consider dedicating an entire virtualization host to a single virtual machine (VM), usually for licensing or performance reasons. Even though this does not aid consolidation or reduce footprint, it provides advantages in availability and recoverability.
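To make the anti-affinity idea concrete, the placement check can be sketched in a few lines of Python. This is only an illustration of the concept; the names `Host`, `can_place` and the VM names are hypothetical, not any hypervisor's API. Real platforms (vSphere DRS, for example) enforce these rules natively.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """Hypothetical model of a virtualization host and the VMs it runs."""
    name: str
    vms: set = field(default_factory=set)

def can_place(vm: str, host: Host, anti_affinity: dict) -> bool:
    """Return False if any VM already on the host must be kept apart from vm."""
    peers = anti_affinity.get(vm, set())
    return not (peers & host.vms)

# Illustrative rule: two database replicas must never share a host.
rules = {"db-replica-1": {"db-replica-2"}, "db-replica-2": {"db-replica-1"}}

host_a = Host("esx-a", {"db-replica-1"})
host_b = Host("esx-b", set())

print(can_place("db-replica-2", host_a, rules))  # False: violates anti-affinity
print(can_place("db-replica-2", host_b, rules))  # True
```

The same structure inverted (requiring a shared host rather than forbidding one) models an affinity rule for components that benefit from co-location.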

Another critical decision is how to move the workload from a physical server to a virtual one. While physical-to-virtual (P2V) conversion tools may be adequate for other applications, be careful about using them with mission-critical applications. The settings of the OS and the applications on it were originally customized for a physical server and, though conversion tools are designed to find and adjust these settings during migration to a VM, a setting can be overlooked. When every millisecond of performance and availability counts, you don't want to bring over artifacts from an installation that was never intended for the virtual server.

Treat this migration like a hardware refresh. The operating system and applications should be installed fresh. Where feasible, even configuration files should be created anew. Recognizing that manually recreating configurations may also introduce risk, use your knowledge of the application or contact the application vendor for advice in deciding which configurations can be safely migrated without carrying over legacy attributes.
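One way to audit which settings would carry over is simply to diff the legacy configuration against the fresh install's configuration and review each difference with the vendor's guidance in mind. The sketch below uses Python's standard `difflib`; the file names and the settings shown are made up for illustration, and in practice you would read the two files from disk.

```python
import difflib

# Illustrative contents of a config file from the old physical server
# versus the freshly installed VM. The keys here are hypothetical.
legacy_conf = """\
max_connections=500
numa_affinity=node0
storage_path=/dev/sdb1
"""

fresh_conf = """\
max_connections=500
storage_path=/vmfs/volumes/datastore1
"""

# Each "-" line is a legacy setting that a P2V tool might have silently
# carried over; each is a candidate for review before migration.
diff = difflib.unified_diff(
    legacy_conf.splitlines(), fresh_conf.splitlines(),
    fromfile="legacy.conf", tofile="fresh.conf", lineterm="")
for line in diff:
    print(line)
```

Settings that appear only in the legacy file (here, the hardware-specific `numa_affinity` and device path) are exactly the kind of physical-era artifacts the paragraph above warns about.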

Measure twice, migrate once

Never forget that you are dealing with critical infrastructure. These environments should already have monitoring tools in place to measure response times, performance metrics and availability. Before making changes, have solid data to provide a baseline of how the environment behaved prior to those changes.

Also ensure that you have at least 45 days of data, enough to capture any weekly or monthly business cycles that cause fluctuations in usage and performance. If an application slows down the week after it is virtualized, no one will accept that the slowdown is normal month-end processing. All that users will know is that you virtualized the application and now it is slow. Have before-and-after snapshots on hand to defend against these complaints, or to troubleshoot the valid issues that do emerge. The more granular the reporting, the better. Some issues will be real, most will be imagined, but all must be given the attention that mission-critical applications deserve.
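A baseline that includes the month-end spikes lets you distinguish "normal for this business cycle" from a genuine regression. A minimal sketch, assuming illustrative response-time numbers rather than real measurements:

```python
import statistics

# Hypothetical 45-day baseline of response times (ms), including
# month-end processing spikes around 400 ms.
baseline_ms = [120, 118, 125, 130, 119, 122, 410, 405, 121, 117]

mean = statistics.mean(baseline_ms)
stdev = statistics.stdev(baseline_ms)

def is_anomalous(sample_ms: float) -> bool:
    """Flag a post-migration sample only if it falls outside
    the baseline's normal spread (mean +/- 2 standard deviations)."""
    return abs(sample_ms - mean) > 2 * stdev

print(is_anomalous(400))  # False: consistent with historical month-end load
print(is_anomalous(900))  # True: a genuine outlier worth investigating
```

With a baseline truncated to only the quiet weeks, the 400 ms month-end sample would look like a virtualization-induced regression; with the full cycle captured, it does not.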

For more information on virtualizing enterprise applications, read the next section in this two-part series, “Server Virtualization Benefits for Mission-Critical Workloads.”

This was last published in May 2013
