Successful virtualization disaster recovery doesn't happen by accident. It requires careful consideration of several factors, including the network connecting primary and secondary sites, virtual server sizing and the performance impact of backup and replication.
An infrastructure evaluation normally begins with the network (LAN/WAN) resources that connect primary and secondary sites. Synchronous replication duplicates each write between sites, keeping them in real-time lockstep. However, this bandwidth-intensive approach carries the highest costs and is limited in distance, because every write must wait for acknowledgment from the remote site.
Asynchronous replication synchronizes the sites periodically rather than on every write. This can achieve acceptable results over greater distances using lower-cost WAN connections like T1 (or slower). However, it requires a company to accept a longer recovery point objective (RPO), since any data written after the last synchronization may be lost in a disaster.
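The tradeoff between the two approaches can be put in rough numbers. The sketch below checks whether a WAN link could keep up with synchronous replication, and estimates the data exposure of an asynchronous schedule; the write rate, overhead factor and sync interval are illustrative assumptions, not figures from this article.

```python
# Back-of-envelope replication sizing sketch. All workload numbers below
# are assumptions for illustration only.

def mbps(bytes_per_sec):
    """Convert a byte-per-second rate to megabits per second."""
    return bytes_per_sec * 8 / 1_000_000

avg_write_rate = 2_000_000    # bytes/sec of changed data at the primary site (assumed)
protocol_overhead = 1.3       # ~30% replication framing/ack overhead (assumed)
wan_link_mbps = 1.544         # a T1 link, as mentioned above

required = mbps(avg_write_rate * protocol_overhead)

# Synchronous replication must keep pace with writes in real time.
sync_ok = required <= wan_link_mbps
print(f"Need {required:.1f} Mbps; T1 supplies {wan_link_mbps} Mbps; "
      f"synchronous feasible: {sync_ok}")

# Asynchronous replication can fall behind and catch up later; the backlog
# accumulated between syncs is (roughly) the RPO exposure.
sync_interval_sec = 15 * 60   # replicate every 15 minutes (assumed)
backlog_bytes = avg_write_rate * sync_interval_sec
print(f"Async RPO exposure: up to {backlog_bytes / 1_000_000:.0f} MB "
      f"of unreplicated data per interval")
```

With these assumed numbers, the T1 cannot sustain synchronous replication, which illustrates why that approach tends to demand costlier, shorter-haul links.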
When evaluating bandwidth, remember that replication processes shouldn't impair user access or other daily network performance. "If you're doing several writes at the primary site, the workload across the network increases," said Ray Lucchesi, founder of Silverton Consulting Inc., an independent technology consultant in Broomfield, Colo. If the disaster recovery site will run "hot" to support actual users, there should be enough bandwidth and network connectivity to support the anticipated user load.
Fibre Channel over Ethernet (FCoE), which carries IP and storage traffic on the same interface, can also influence a virtualized DR design. Not only does FCoE promise to simplify storage networking, it can also reduce interface costs through integrated adapters such as the Emulex LP21000 series.
"We're talking about combining a fiber HBA and a network card in one interface at the server and switch levels," said Pierre Dorion, data center practice director at Long View Systems, an IT solutions and services company headquartered in Denver. Dorion pointed out that fewer network adapters will simplify server hardware requirements and reduce power consumption, adding that network virtualization can further enhance the flexibility of a virtualized data center or DR site.
Virtual server size and performance
While there are no definitive requirements for a virtual server, you must size the server's resources -- CPU, memory, I/O and network connectivity -- according to the number of virtual workloads that will reside on it. Experts recommend testing a virtual machine (VM) setup prior to an actual rollout to ensure that application performance remains acceptable on the target hardware. Once baseline performance is established, administrators can recommend hardware upgrades or test various configurations and resource-throttling options to ensure that each VM runs properly.
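The sizing exercise described above amounts to checking aggregate VM demand against host capacity with some headroom held in reserve. A minimal sketch, in which every VM figure, the host capacity and the 25% headroom policy are hypothetical values for illustration:

```python
# Consolidation-check sketch: all capacity and demand numbers are assumed.

host = {"cpu_ghz": 16.0, "mem_gb": 64, "iops": 10_000}

vms = [
    {"cpu_ghz": 2.0, "mem_gb": 8,  "iops": 800},    # hypothetical web VM
    {"cpu_ghz": 3.0, "mem_gb": 12, "iops": 1_200},  # hypothetical app VM
    {"cpu_ghz": 4.0, "mem_gb": 16, "iops": 3_500},  # hypothetical database VM
]

HEADROOM = 0.25  # keep 25% spare capacity for spikes and failover (assumed policy)

def fits(host, vms, headroom=HEADROOM):
    """True if aggregate VM demand stays within host capacity minus headroom."""
    for resource, capacity in host.items():
        demand = sum(vm[resource] for vm in vms)
        if demand > capacity * (1 - headroom):
            return False
    return True

print("Placement OK:", fits(host, vms))
```

A real assessment would of course use measured peak demand rather than nameplate figures, which is exactly why experts recommend testing before rollout.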
Backups can hinder virtual server performance, so evaluate the machine's application performance while replicating VMs in a DR setting. "Backup is probably the most I/O-intensive operation you can subject a server to," Dorion said. "If you're running 15 servers at a time on one physical system, you can really bring a physical server to its knees."
The performance implications of backup/recovery processes can radically affect consolidation choices for an enterprise. For example, five VMs might run well on one physical server, yet that server can slow to an unacceptable crawl during a backup. Dorion noted that there's nothing wrong with hosting a single VM, such as a SQL Server instance, on one physical machine, since this approach still delivers the benefit of nondisruptive migration.
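One common way to keep backup I/O from crippling a consolidated host is to stagger backup jobs so only a few run at once. The sketch below groups VMs into sequential backup waves; the per-VM streaming rate and the host's backup I/O budget are assumed figures, not measurements from the article.

```python
# Stagger VM backup windows so concurrent backup I/O stays within what the
# host can absorb. All throughput numbers are assumptions for illustration.

host_backup_budget_mbps = 400   # I/O the host can devote to backup (assumed)
per_vm_backup_mbps = 150        # streaming rate of one backup job (assumed)

max_concurrent = host_backup_budget_mbps // per_vm_backup_mbps

vm_names = [f"vm{i}" for i in range(1, 6)]  # five VMs, as in the example above

# Split the VM list into sequential waves of at most max_concurrent jobs.
waves = [vm_names[i:i + max_concurrent]
         for i in range(0, len(vm_names), max_concurrent)]

for n, wave in enumerate(waves, start=1):
    print(f"Backup wave {n}: {', '.join(wave)}")
```

Under these assumptions only two jobs fit at a time, so the five VMs back up in three waves; the backup window lengthens, but the host is never saturated.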
Stephen J. Bigelow, Senior Technology Writer, can be contacted at firstname.lastname@example.org.
This was first published in October 2009