Migrating physical systems to virtual machines presents several significant backup challenges. Most organizations must cope with the possibility of fully restructuring parts of their data protection strategy after adding virtualized systems to their production environment.
Several factors typically drive the need to re-architect a data protection solution following a virtualization migration project, including:
- CPU and I/O bottlenecks
- New backup/data protection options
- Licensing changes
Let's begin with bottlenecks. Prior to converting a physical system to a virtual machine (VM), the typical server had plenty of available CPU horsepower for CPU-intensive jobs, such as scheduled backups or antivirus scans. After conversion, the typical VM shares physical CPU resources with five or more other VMs -- sometimes more than 20 -- on the same host. Sharing the CPU means that the cycles needed for backup may not be available if individual backups of all VMs on a given server attempt to run at the same time. Planning tools such as CiRBA or PlateSpin PowerRecon can identify CPU spikes related to backup jobs and map out an optimal VM placement strategy that accounts for those spikes.
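To see how concurrency drives the CPU problem, a simple scheduling sketch can stagger per-VM backup start times so that only a few jobs compete for the host's CPU at once. The function and its parameters below are hypothetical illustrations, not features of any particular backup product:

```python
# Sketch: stagger per-VM backup start times on one host so that at most
# max_concurrent jobs run at the same time. Purely illustrative.

def stagger_schedule(vms, start_hour, slot_minutes, max_concurrent):
    """Map each VM name to a backup start time, in minutes past midnight."""
    schedule = {}
    for i, vm in enumerate(vms):
        wave = i // max_concurrent  # which "wave" of jobs this VM joins
        schedule[vm] = start_hour * 60 + wave * slot_minutes
    return schedule

# Six VMs, backups starting at 1:00 a.m., two jobs at a time, 30 minutes apart:
vms = ["vm%02d" % n for n in range(6)]
print(stagger_schedule(vms, start_hour=1, slot_minutes=30, max_concurrent=2))
# vm00/vm01 start at minute 60, vm02/vm03 at minute 90, vm04/vm05 at minute 120
```

The same idea generalizes across hosts: placement tools effectively solve a larger version of this scheduling problem.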
Backup is also extremely I/O intensive, so both network and storage controller bottlenecks are possible during VM backup operations. Backup or restore jobs may traverse the LAN or SAN, and multiple concurrent backup or recovery operations on the same physical host may strain available physical resources.
Even with enough CPU to push a backup, limits on the number of shared network or storage adapters available to VMs may degrade backup performance; the end result is backup operations that extend beyond the organization's backup window. Quantifying backup disk I/O and network I/O requirements should therefore be a critical planning element.
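As a rough illustration of why quantifying I/O matters, a back-of-the-envelope backup-window estimate can be computed from total data size and sustained throughput. All figures below are made-up assumptions for the sake of the example, not measurements:

```python
# Rough backup-window estimate for VMs funneled through one shared I/O path.
# The VM count, sizes, and throughput are illustrative assumptions.

def backup_hours(total_gb, throughput_mb_s):
    """Hours needed to move total_gb at a sustained throughput of throughput_mb_s."""
    seconds = (total_gb * 1024) / throughput_mb_s
    return seconds / 3600

# Assume 8 VMs of 60 GB each sharing one gigabit NIC; ~70 MB/s sustained
# is a more realistic figure than the ~100 MB/s theoretical line rate.
total_gb = 8 * 60
hours = backup_hours(total_gb, 70)
print(f"{hours:.1f} hours")  # just under 2 hours for a single full pass
```

If the computed window exceeds the time available, the options are the same ones discussed above: more adapters, fewer concurrent jobs, or fewer VMs per host.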
New data protection options
All virtualization platforms provide their own unique backup options. I prefer to use virtualization vendor-proprietary solutions when it makes sense as part of an overall backup strategy. For example, a best practice is to perform a VM image-level backup each time an operating system (OS) or application update or change is made in a VM. Securing a VM image backup after each major change provides a new baseline image for disaster recovery restores. With an up-to-date VM image containing the most recent OS and application patches, recovery should only involve restoring the most recent data files from backup. Nearly all virtualization platforms provide tools for creating live snapshots of VMs, so regularly securing VM image snapshots should be feasible, regardless of your preferred virtualization platform.
While snapshots are a great DR tool, file-level recovery is still necessary for most day-to-day restore needs. New backup alternatives such as VMware Consolidated Backup (VCB) make it possible to secure file-level backups of Windows VMs from a live VM snapshot. VCB uses a proxy server connected to the SAN to provide serverless backup capabilities for VMware ESX VMs. Use of a proxy server allows a VCB backup to run with little to no impact on the ESX host where the VMs being backed up actually reside. However, VCB does not support file-level backups of all VM guest OSes, and many backup products integrate with VCB by running a file system backup agent (or client) on the VMware VCB proxy server.
While such a solution is technically feasible, problems may arise when a member of the IT staff needs to restore a file. The user attempting the restore would need to know the name of the VCB proxy system that backed up the VM in order to recover its data. So not only is the name of the VCB proxy needed, but IT staff would also need to know whether a system is physical or virtual in order to recover it. Running an advanced client (e.g., Symantec NetBackup) or proxy host agent (CommVault) inside a VM's guest OS and using it to coordinate backup and restore jobs with VCB would provide a means to leverage VCB's features while also providing a client object for each VM in the backup software's restore GUI. Thus, the restore view for both physical and virtual systems would look exactly the same.
Aside from serverless backup possibilities, continuous data protection (CDP) offers additional flexibility. CDP tools from vendors such as Neverfail, Double-Take Software, CA, Symantec, and Vizioncore can be used to maintain a DR VM across either a LAN or WAN segment. As long as the DR VM remains offline, additional software licensing is not required. By using CDP, production and recovery VMs can be kept in sync in near real time. Backups required for offsite archiving could then be run against the DR VMs, removing the backup performance hit from an organization's production VMs.

Along with new data protection options come new backup requirements. VM configuration files -- such as VMware .vmx files -- must be backed up, since they contain each VM's configuration information: memory, virtual CPUs, storage, and network settings, including MAC addresses. Physical host system data needs to be backed up as well. It's imperative that the virtual network and storage configurations of each physical host be backed up regularly; without a backup, manually reconfiguring those settings on a physical host could take several hours.
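Collecting VM configuration files for a regular backup job can be automated. A minimal sketch, assuming the configuration files live under a datastore directory tree (the path in the usage comment is a hypothetical example, not a real mount point):

```python
# Sketch: gather VM configuration files (e.g., VMware .vmx files) from a
# directory tree so they can be fed to a scheduled backup job.
import os

def find_vm_configs(root, extensions=(".vmx",)):
    """Return a sorted list of VM configuration file paths found under root."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(extensions):
                matches.append(os.path.join(dirpath, name))
    return sorted(matches)

# Hypothetical usage: find_vm_configs("/vmfs/volumes/datastore1")
```

The same walk could be extended to host-level configuration files, so the virtual network and storage settings discussed above are captured on the same schedule.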
Once a new backup architecture is devised, you will also need to revisit product licensing and support for all backup products used to protect your VMs. Many backup vendors now license their file system agents per physical host (instead of per VM), so it's possible that some backup licensing and support costs will be reduced. Many backup vendors offer discounted licensing bundles for virtualized environments as well.
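The potential savings from per-host licensing are easy to sketch. The prices below are hypothetical placeholders for illustration, not actual vendor figures:

```python
# Illustrative comparison of per-VM vs. per-physical-host backup agent
# licensing. All prices are made-up placeholders.

def license_cost(hosts, vms_per_host, per_vm_price, per_host_price,
                 per_host_licensing):
    """Total agent licensing cost under the chosen licensing model."""
    if per_host_licensing:
        return hosts * per_host_price
    return hosts * vms_per_host * per_vm_price

# 4 hosts running 10 VMs each: a hypothetical $500 per-VM agent vs.
# a hypothetical $2,000 per-host agent.
per_vm = license_cost(4, 10, 500, 2000, per_host_licensing=False)   # 20000
per_host = license_cost(4, 10, 500, 2000, per_host_licensing=True)  # 8000
print(per_vm, per_host)
```

As VM density per host grows, the per-host model's advantage grows with it, which is worth modeling before renewing support contracts.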
Tip of the iceberg
This article took a look at some of the major architectural issues that impact backup in virtual environments. While changes in backup strategy may not appear daunting on the surface, the number of moving parts -- e.g., working around I/O and CPU bottlenecks, new backup options, shrinking backup windows, new product licensing -- can make architecting backup for virtual environments a challenge. Success is often realized by incorporating best-of-breed planning tools and by re-architecting backup strategy early in any virtualization project.
Fixing problems in backup architecture is always more of a challenge after a virtualization adoption has been completed. With awareness of the backup gotchas that exist, along with acceptance that the status quo won't protect everything in a virtualized information system, you should be on your way toward architecting a backup strategy that not only protects virtual resources, but leverages new techniques to protect systems better than ever before.
About the author: Chris Wolf is a senior analyst for Burton Group and author of several IT books. Check out a chapter on backup from Wolf's book, Virtualization: From the Desktop to the Enterprise.
This was first published in November 2007