Data center planning is still a must before deploying high availability (HA) in a virtual environment, even though server virtualization offers a tremendous amount of flexibility.
Virtual machines (VMs) can gain immediate high availability with tools like Marathon Technologies' everRunVM, and the availability of secondary applications can be dramatically enhanced with virtualization tools such as VMware Distributed Resource Scheduler (DRS) and VMotion.
Although this is seen as a clear win for organizations, IT departments need to do some data center planning -- weighing the business issues and technical obstacles involved -- before they implement a high-availability data center.
"You have to understand how you want to set up the high-availability environment," said Ray Lucchesi, the president and founder of Silverton Consulting Inc., an independent technology consulting firm in Broomfield, Colo. "There's some complexity here that is kind of a barrier to implementation."
High-availability data center planning process
First, the applications themselves must be evaluated. Many companies own legacy or internally developed applications that are critical but don't support traditional high-availability clustering. By virtualizing these applications, however, they can use everRunVM or VMware DRS to make them more fault tolerant.
In the high-availability data center planning process, managers need to rethink redundant physical servers. In a traditional nonvirtualized HA environment, relatively simple servers would typically handle a single operating system and application. A data center setup, for example, may include two redundant Exchange 2003 servers.
In that case, it's possible to virtualize both redundant servers on a 1:1 basis. But in most cases, a virtualized server is consolidated to host numerous VMs. As a result, server resources -- CPU, memory, I/O and network connectivity -- must provide enough computing power to support the anticipated number of VMs.
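The sizing exercise above can be sketched as a simple capacity check. The function name, resource figures and 25% reserve below are illustrative assumptions for planning purposes, not vendor guidance:

```python
# Hypothetical capacity check: do a host's resources cover the planned VMs?
# All names and numbers here are illustrative, not vendor sizing figures.

def host_can_support(host, vms, headroom=0.25):
    """Return True if the host covers the VMs' total demand plus a
    reserve fraction (headroom) for spikes and failover."""
    for resource in ("cpu_ghz", "mem_gb", "net_gbps"):
        demand = sum(vm[resource] for vm in vms)
        if demand * (1 + headroom) > host[resource]:
            return False
    return True

host = {"cpu_ghz": 32.0, "mem_gb": 128, "net_gbps": 10}
vms = [
    {"cpu_ghz": 4.0, "mem_gb": 16, "net_gbps": 1},  # e.g., an Exchange VM
    {"cpu_ghz": 6.0, "mem_gb": 24, "net_gbps": 2},  # e.g., a SQL VM
]
print(host_can_support(host, vms))  # True: demand plus 25% reserve fits
```

Running the same check with a proposed VM added is a quick way to test whether a consolidation target leaves enough headroom.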
For clustered servers running duplicated VMs, extra headroom is less of a concern because a second server in the cluster already hosts one or more duplicate VMs; no additional VMs need to be loaded from storage. Experts note that as long as their physical requirements, such as power and cooling, are met, blade and standalone servers work equally well for virtualization.
High-availability data center planning considerations
But virtual servers that are not clustered may need to provide supplemental processing capacity to accommodate VMs that are failed over from other servers. This important capability enhances the availability of other VMs that are not protected with HA tools, but IT administrators must work out a failover plan in advance.
"You need to have your hardware and hypervisor in sync so that when a particular piece of physical hardware goes offline, the hypervisor management knows where to put [or fail over] the virtual machine," said Dave Sobel, CEO of Evolve Technologies in Fairfax, Va.
Managers frequently overlook this kind of data center planning. But allowing virtualization software to select available failover locations automatically may cause unexpected resource shortages, which in turn can lead to poor performance or outright application crashes on the virtual server that receives the failed-over VM. One way to forestall potential resource shortages is to allocate one or more additional servers as dedicated failover platforms.
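A failover plan worked out in advance can be as simple as a placement map that checks capacity before assigning displaced VMs. The sketch below is a planning aid under assumed names and memory-only capacity tracking; real tools such as VMware DRS make these decisions automatically, but walking through the map by hand exposes shortfalls before an outage does:

```python
# Sketch of a pre-planned failover map. Host and VM names are hypothetical,
# and only memory is tracked for simplicity; a real plan would also weigh
# CPU, I/O and network capacity.

def plan_failover(displaced_vms, spare_hosts):
    """Assign each displaced VM to the first spare host with room,
    or mark it None so the shortfall is flagged for capacity review."""
    plan = {}
    for vm in displaced_vms:
        for host in spare_hosts:
            if host["free_mem_gb"] >= vm["mem_gb"]:
                host["free_mem_gb"] -= vm["mem_gb"]
                plan[vm["name"]] = host["name"]
                break
        else:
            plan[vm["name"]] = None  # no spare can absorb this VM
    return plan

displaced = [{"name": "exchange-vm", "mem_gb": 16},
             {"name": "sql-vm", "mem_gb": 24}]
spares = [{"name": "standby-1", "free_mem_gb": 48}]
print(plan_failover(displaced, spares))
```

A `None` entry in the resulting plan is exactly the "unexpected resource shortage" the article warns about, surfaced at planning time rather than during a failure.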
Data center managers can enhance application availability -- and mitigate risk even further -- by distributing VMs across physical servers to balance the load. In general, they should avoid placing multiple critical VMs on the same server. Consider two critical VMs, such as Exchange Server and SQL Server, running on the same consolidated server: even if both VMs are replicated to a second clustered server, both are highly available yet both depend on that same second server.
Some organizations opt to enhance availability even further by placing, say, the Exchange Server VM on server A and the SQL VM on server B, with a third server in the cluster replicating both VMs. This way, if the Exchange VM fails on server A, Exchange continues to run from the third server's replica, while the SQL VM on server B is unaffected and continues to operate normally.
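The "don't co-locate critical VMs" rule above amounts to an anti-affinity check on a placement map. The following is a minimal sketch with made-up host and VM names; in VMware environments this kind of rule is expressed as a DRS anti-affinity rule rather than hand-rolled code:

```python
# Minimal anti-affinity audit: flag any host that carries more than one
# VM from the critical set. Placement data is illustrative.

def anti_affinity_violations(placement, critical_vms):
    """placement maps host name -> list of VM names; return (host,
    critical VMs) pairs for hosts that carry 2+ critical VMs."""
    return [
        (host, sorted(set(vms) & critical_vms))
        for host, vms in placement.items()
        if len(set(vms) & critical_vms) > 1
    ]

critical = {"exchange-vm", "sql-vm"}
placement = {
    "serverA": ["exchange-vm", "sql-vm"],  # both critical VMs on one host
    "serverB": ["file-vm"],
}
print(anti_affinity_violations(placement, critical))
```

An empty result means no host is a single point of failure for more than one critical workload; any listed host is a candidate for redistributing VMs as described above.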
Data center high availability trends
The introduction of virtualization has blurred the line between data center high availability and disaster recovery (DR). With virtualization, data protection and recovery tasks that once took hours are now possible in a matter of minutes.
"What you previously had to do with HA, perhaps you can do now with strict DR technologies," Sobel said. "It's up to each individual business to define where their line of HA versus DR is, but they really kind of are the same thing; the difference is time."
Sobel added that HA and DR are not mutually exclusive and can easily coexist in the same virtual environment. Virtualization has also driven down the costs of HA and DR, allowing data center managers to extend cost-effective protection to applications for which it would have been impractical just a few years ago.
In the near term, enormous competition will likely drive virtualization features and capabilities. Microsoft bundled Hyper-V R2 and more virtualization features in Windows Server 2008 R2, and Citrix threw down the gauntlet by releasing its enterprise-class XenServer 5 platform for free.
Over the longer term, Lucchesi points to the emergence of cloud computing and Software as a Service for even greater abstraction.
"It's a restructuring of the applications," he said, adding that cloud architecture further decouples applications from specific servers. "Once [the application] is in the cloud, the whole dynamic of HA changes considerably."
Implementing an application for the cloud is not easy and may not be practical for many existing applications today. Still, Lucchesi said he expects the cloud to eventually enable and support HA directly.
About the Author
Stephen J. Bigelow, a senior technology writer at TechTarget, has more than 15 years of technical writing experience in the technology industry. He has written hundreds of articles and more than 15 feature books on computer troubleshooting, including Bigelow's PC Hardware Desk Reference and Bigelow's PC Hardware Annoyances. Contact him at email@example.com.