While there are many tools to help IT professionals migrate from physical machines to virtual machines (P2V) during a virtualization rollout, very few help them go the other direction, virtual to physical (V2P). This is ironic, since P2V is typically a one-time use case at rollout, while V2P can be a continuous and ongoing need.
Reasons to migrate back to physical servers
The need to make a virtual-to-physical move may seem counterintuitive at first, but it is a much-needed capability, so much so that VMware has a tech note that steps through the manual process of doing this. Their stated primary reason, and certainly one of the most common, is support issues. While most major software manufacturers support running their applications in a virtualized environment, almost every data center has a few customized applications, written by smaller manufacturers, that may not be supported when virtualized. Even if the application producer does support a virtual environment, you may choose to unvirtualize just to eliminate a variable.
The second need is in response to a common justification for virtualization. Many virtualization projects are justified on the concept of improving disaster recovery (DR) cost metrics. This is done by keeping server count at the DR site to a minimum: servers that were standalone physical machines at the primary site are virtualized at the DR site. It is an excellent use of virtualization technology and greatly simplifies and reduces DR site costs. The challenge is how to get out of this model once the disaster has passed without manually following VMware's detailed step-by-step process.
The third use may be to resolve performance-related issues caused by virtualizing the workload of a standalone machine. This might be obvious soon after virtualization, where the workload simply does not behave well in a virtualized environment, and it often surfaces after the move to production. The newly virtualized server performed well in test, and perhaps initially in production, but as the user load on the application increased, its demands on the virtualization host began to degrade the performance of the other virtual machines on that host.
Another case, which may not be as dramatic, is a workload with specific heavy-load periods. For example, if you have a process that bogs down a virtual machine twice a year, it might make sense to move that workload to its own standalone physical machine during those timeframes and then re-virtualize once it is complete. This can be especially necessary when the workload increase results in heavy network or storage I/O.
The final use may be as a server migration tool: to move a physical server to another physical server quickly. This would be a situation where there is a need to upgrade the compute or I/O resources available to a physical standalone server, but you want to retain some of the abstraction of virtualization.
P2V, V2P tools
The few tools available to help with this data move range from cost-effective but manual processes, as VMware's step-by-step example proves, to fully automated infrastructure technologies. The tool you choose depends in large part on your reasons for wanting a V2P tool.
V2P solutions are the result of end-user demands as users become more immersed in a virtual infrastructure. Traditional virtualization software companies such as Vizioncore and Platespin are coming to market with V2P technologies, while infrastructure virtualization software vendors like Scalent and Unisys have offered V2P capabilities for quite a few years.
If your primary goal is to be able to resolve a support request, either utility software technology or infrastructure virtualization (IV) technology will resolve the issue, though IV products offer more than just a resolution to this support issue. With the software utilities, it becomes a matter of copying the virtual image out to the standalone physical machine.
With the software utilities, more attention must be paid to the type of hardware to which you are moving the image; it must be nearly identical to the virtual platform on which the image was installed. With IV, you merely point a bare-metal machine at the networked server image, boot, and go. The core difference here is time: the software conversions are typically offline processes. Under IV there is also no need to be concerned about the actual physical hardware, as IV still offers a level of hardware abstraction.
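At its core, the "copy out" step these software tools perform is a raw, block-level copy of the guest's disk image onto the physical machine's disk (followed by driver and boot fixes on the target). A minimal sketch of just the block-copy step, using ordinary files in place of block devices; the file names, chunk size and checksum verification are illustrative assumptions, not any vendor's actual tool:

```python
import hashlib

CHUNK = 1024 * 1024  # copy 1 MiB at a time to keep memory use flat


def block_copy(src_path, dst_path):
    """Copy a disk image block for block, returning a SHA-256 checksum
    of the data written so the copy can be verified afterward."""
    digest = hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            dst.write(chunk)
            digest.update(chunk)
    return digest.hexdigest()

# In a real V2P move, dst_path would be a raw block device (e.g. /dev/sdb)
# and src_path the exported guest disk image; here both are plain files.
```

The checksum matters in practice: a V2P copy that completes but silently corrupted a few blocks is worse than one that fails outright, since the error may not surface until the physical server boots.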
If your primary goal is to ease a P2V-V2P disaster recovery plan, again, the software utilities provide a simple way to replicate those physical images and have them prepped as virtual images, ready for activation in a disaster. Most of these utilities can do their own replication and do not require a special storage platform. They are also now adding an automated way to unvirtualize after the disaster has passed.
Another factor here is recovery from your backup software to standalone physical hardware. The problem, again, is speed and the need for near-identical hardware. For critical servers, this may not be appropriate.
Infrastructure virtualization extends this capability, automating not only the P2V and V2P movements but also the storage and network connections as they change. Additionally, IV solutions can power devices on and off, so at a DR site the standby servers could sit in a powered-off state; the only devices that would need to be powered on are the IV controller and the storage replication target. As mentioned, with IV these moves can be made in near real time and can be largely automated.
When a workload does not perform well in a virtualized environment, most of these tools can help with a move back to a physical environment. The software utilities will again require greater attention to the actual physical hardware, but they are a simple way to resolve the issue. Infrastructure virtualization can likewise make the move to a physical system, with less concern over the actual physical hardware that the workload is being moved to.
IV is better suited for automated moves based on overall workload conditions, such as seasonal spikes in compute or resource consumption. From an IV remote GUI, an administrator could power on a previously powered-off physical machine, set up the appropriate storage and network connections, and then point the machine at a boot image that would take over the processing for this workload. The workload could be saved to a template that automates all of the connections and makes the seasonal activation as simple as clicking a button. Simplicity is important: the quicker and easier the process, the more aggressive the user can be with these seasonal adjustments; they may even find it easy enough to move to dedicated physical machines on weekends to handle end-of-week processing.
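The template-driven activation described above can be sketched as a small orchestration script. Everything here is hypothetical: the template fields, step names and machine identifiers stand in for whatever a given IV product's controller actually exposes; the point is that a saved template reduces the seasonal move to a single call.

```python
from dataclasses import dataclass, field


@dataclass
class SeasonalTemplate:
    """Saved description of a seasonal V2P move: which machine to wake,
    which storage and network connections to make, which image to boot."""
    machine: str
    storage_luns: list
    vlans: list
    boot_image: str
    log: list = field(default_factory=list)

    # Each step just records what a real IV controller would be told to do.
    def power_on(self):
        self.log.append(f"power-on {self.machine}")

    def connect(self):
        for lun in self.storage_luns:
            self.log.append(f"map {lun} -> {self.machine}")
        for vlan in self.vlans:
            self.log.append(f"join {self.machine} to {vlan}")

    def boot(self):
        self.log.append(f"netboot {self.machine} from {self.boot_image}")

    def activate(self):
        """The 'single button': run every step in order."""
        self.power_on()
        self.connect()
        self.boot()
        return self.log


# Example: a twice-a-year batch workload moved to a dedicated box.
quarter_end = SeasonalTemplate(
    machine="blade-07",
    storage_luns=["lun-42"],
    vlans=["vlan-batch"],
    boot_image="images/quarter-end.img",
)
```

Re-virtualizing after the busy period would be the mirror image: the same template replayed in reverse, ending with the physical machine powered back off.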
The decision on whether to use software tools or infrastructure virtualization will probably be decided more by the size of your organization. The software tools are valuable as point solutions for enterprise customers, but those customers are typically better served by the full capabilities of infrastructure virtualization. The small to medium-sized enterprise will likely find IV overkill for its needs, and the software tools more than adequate and more cost effective. The overriding factor is response time: if your business needs near real-time conversion from V2P and P2V, IV may be appropriate no matter what your organization's size.
About the author: George Crump is President and Founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration and product selection.