This is the second part of a two-part series. For more information on virtualizing enterprise applications, read the first installment, “What to Consider Before Virtualizing Mission-Critical Applications.”
There is a difference between what can be virtualized and what should be virtualized. In the end, however, the arguments against virtualizing mission-critical applications are losing their validity. The advantages of virtualizing these applications are too great to overlook, and with a well-designed virtual infrastructure, an organization will find it difficult to justify not virtualizing.
Since each application may require a unique approach, the examples below walk through the thought process behind virtualizing a few popular mission-critical workloads and realizing those server virtualization benefits.
Web servers. Web servers are often inexpensive resources, almost a commodity in many organizations. They have low resource demands and are often deployed in groups. Few organizations would consider a Web server a mission-critical application. But an Internet presence is essential to conducting business in today's environment. Targeting Web servers is a win-win for virtualization. They have a small footprint that is easy to virtualize, and they benefit greatly from the high availability and agility offered by virtualization. Demand for Web servers can also be closely linked to seasonal trends and business cycles, allowing them to benefit from virtualization's ability to rapidly deploy and decommission virtual machines (VMs).
The best way to realize those server virtualization benefits is to build a new virtual Web server and migrate the website to it, which results in a cleaner VM. That said, Web servers are generally tolerant of minor imperfections in OS configuration, so -- though not always recommended -- they can also be virtualized with physical-to-virtual (P2V) migration tools.
Application servers. Application servers cover a range of performance profiles. Depending on the application they host, they can be anything from a small server running a simple JSP or .NET application to a large server running a complex Java application. The size and complexity of an application server often correlate directly with the role the application plays within the business. With a complex Java application, the application server is both mission-critical and a tier-one resource -- and the size and complexity of the application also make it difficult to deploy. Whether physical or virtual, deploying an application server requires precision tuning of both the OS and the application.
Virtualization provides several advantages here. In many cases, the underlying infrastructure is more easily tuned in a virtual environment. This includes network devices, CPU resources, memory and other key resources. After the tedious task of tuning the infrastructure, the OS and the application to achieve the desired performance, virtualization allows you to quickly and easily create a clone of that VM. This makes future deployments more efficient and accurate. Decoupling the VM from the physical hardware also insulates the application server administrators from having to reproduce this effort every time a new hardware platform is adopted.
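The tune-once, clone-many workflow described above can be sketched abstractly. This is a minimal illustration of the idea, not a real hypervisor API -- the class and field names here are hypothetical, and actual platforms expose cloning through their own tooling (libvirt, vendor consoles and so on):

```python
import copy
from dataclasses import dataclass, field

@dataclass
class VMTemplate:
    """A tuned 'golden image': OS and app settings captured once."""
    name: str
    vcpus: int
    memory_gb: int
    tuning: dict = field(default_factory=dict)  # OS/app tuning parameters

    def clone(self, new_name: str) -> "VMTemplate":
        # Cloning copies every tuned setting, so each new deployment
        # starts from the same validated configuration.
        vm = copy.deepcopy(self)
        vm.name = new_name
        return vm

# Tune once (illustrative values)...
golden = VMTemplate("appserver-gold", vcpus=8, memory_gb=32,
                    tuning={"jvm_heap_gb": 24, "tcp_backlog": 1024})

# ...then every future deployment inherits that effort.
node2 = golden.clone("appserver-02")
assert node2.tuning == golden.tuning
```

The point of the sketch is the decoupling: because the tuned configuration lives in the template rather than on a specific physical host, a new hardware platform does not force administrators to repeat the tuning exercise.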
As application performance can be closely linked to underlying infrastructure, do not take the task of virtualizing an application server lightly. Though not difficult, it can be a time-consuming task of tuning resources, measuring performance and then adjusting resources again. Under no circumstances should you use a P2V migration tool to virtualize a mission-critical, tier-one application server. This brings over too many legacy settings from one hardware platform to another, and overcomplicates the configuration tasks required to create a stable environment.
Database servers. Unlike Web servers and application servers, database servers are rarely configured to spread production workloads across multiple resources. More often, a database is deployed as a standalone resource or in an active/passive cluster, where a single node serves the workload at any given time. Database servers can be even more complex and sensitive to OS configuration than application servers, and significantly more resource-intensive.
All these attributes should be a flashing caution sign for any virtualization administrator who wants to virtualize a database server workload. But even the biggest and most complicated database servers can benefit from virtualization. Granted, some databases may require 100% of a virtualization host's resources, but the high-availability and portability features provided may justify the effort.
Many databases can exploit software clustering features to provide a rapid recovery of database services in the event of a hardware failure. Unfortunately, these features can also require expensive licensing and result in a very complex configuration. The more complex a configuration, the more likely it is to experience issues from human error.
In contrast, most hypervisors provide high-availability features that can move a failed database server to new hardware and reboot it almost as quickly as software clustering can restore the same database services. High availability within the hypervisor does not require additional database software licensing and will not require any complicated configurations in the database environment. What may add only one to two minutes to automated recovery tasks can save hours in maintenance tasks -- a significant server virtualization benefit.
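That trade-off can be put into rough numbers. The figures below are illustrative assumptions for the sake of the arithmetic -- not measurements from any particular environment:

```python
# Back-of-the-envelope comparison of hypervisor HA vs. software clustering.
# All figures are illustrative assumptions, not measured values.
failures_per_year = 2            # assumed hardware failures per year
extra_recovery_min = 2           # extra minutes per failure for an HA reboot
clustering_maintenance_hrs = 10  # assumed annual upkeep for software clustering

extra_downtime_hrs = failures_per_year * extra_recovery_min / 60
print(f"Extra downtime with hypervisor HA: {extra_downtime_hrs:.2f} h/yr")
print(f"Clustering maintenance avoided:    {clustering_maintenance_hrs} h/yr")
```

Under these assumptions, a couple of minutes of additional recovery time per incident amounts to well under an hour per year, against hours of avoided maintenance -- which is the core of the argument above.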
Like an application server, a database server must be tuned to the specific hardware resources and operating system that make up the underlying infrastructure. This makes the use of P2V tools difficult and also complicates the task of restoring a reliable database service on disaster recovery hardware. However, when the database server is built and tuned for virtual hardware, almost any x86 server platform will make a suitable recovery host.
Microsoft Exchange. Microsoft Exchange is an excellent example of a high-performance environment that thrives on virtual hardware. At the same time, Exchange 2010 introduces new features that illustrate why it is important to know your application before you virtualize it.
One of those features is the database availability group (DAG). A DAG synchronizes data across multiple servers, allowing almost immediate failover of a workload with a purely software solution, and it works between any combination of virtual and physical hardware platforms. Given this feature and the relative ease of deploying and maintaining it, why virtualize Exchange? Because virtualization still provides advantages in an Exchange environment. Software features may offer rapid failover of services, but a down server still leaves the environment running at diminished capacity. The remaining servers carry additional workload, increasing the risk of a second and more costly failure. This is where a hypervisor platform can quickly detect a failure and reboot the VM on another server, restoring redundancy within the environment.
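The diminished-capacity argument is simple arithmetic. A short sketch, using a hypothetical four-node group serving the whole mailbox workload:

```python
def load_per_server(total_load: float, servers: int) -> float:
    """Average load each server carries for a given farm size."""
    return total_load / servers

# Hypothetical four-node group serving 100% of the workload.
total, nodes = 100.0, 4
normal = load_per_server(total, nodes)        # 25% per server
degraded = load_per_server(total, nodes - 1)  # ~33% per server after one failure

# Each survivor absorbs an extra share of the failed node's work until
# the hypervisor reboots the failed VM and restores full redundancy.
print(f"{normal:.1f}% -> {degraded:.1f}% per server")
```

The smaller the group, the sharper the jump: in a two-node pair, the surviving node's share doubles, which is why quickly restoring the failed VM matters even when software failover keeps the service online.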
Since Microsoft Exchange supports live mailbox moves, there is no need to use P2V tools to move a physical Exchange deployment to a virtual environment. Instead, build new virtual Exchange servers and migrate the mailboxes to them.