Physical-to-virtual (P2V) migration software is the new "Swiss Army knife" of the x86 virtualization toolbox.
Indeed, it can be the handiest tool of all when data centers must be moved.
Physically moving a data center and all of the assets within it is expensive, complicated and risky. Aside from the potential unknowns that can arise from poor planning, downtime and data loss, a data center move is a huge undertaking for even the most seasoned IT professionals. It's not something a business does frequently enough for it to become second nature or to warrant a team dedicated to executing it.
But with advancements in x86 virtualization technologies, migrating servers and attached storage is becoming less complex, which in turn reduces risk, costs, downtime and exposure. It is no longer necessary to move a physical box on a truck and cross your fingers that the machine comes up after the move. Virtualization technologies allow you to conduct much of the move via the IP network, lending flexibility to the process, saving time, and reducing expense and exposure.
Mitigate, then move or migrate
In the past, the "forklift" move was one of the few options for data center moves, where a server or asset would be moved physically (via a truck). A less-used option was to dump the data to tape, restore on the target site and manually input updated changes.
Today, more robust network capabilities, coupled with P2V technologies and networked storage, enable organizations to transmit an entire server with its applications and data via the IP network.
Several alternatives have entered this market, all with enviable success and integration points that span not just P2V but also physical-to-physical (P2P), virtual-to-physical (V2P) and virtual-to-virtual (V2V) migrations.
Widespread acceptance, ease of integration and the server sprawl created by x86 servers have contributed to the majority of data center issues around connectivity, HVAC, management and technology turnover. Thus, in the context of a data center move, this single asset type raises the most questions about how many boxes to move.
So when planning a move, which applications are affected? What interrelationships have been created by the servers and their dependent applications? The move itself can be a compelling opportunity to refresh technology or migrate to virtualized servers in the process. You may even treat the move as a test of the robustness of your organization's disaster recovery and business continuity objectives.
With P2V software, you can generally keep the source and target servers in sync until you determine the exact cutover point, alleviating the hand-wringing over whether the server, the application or the data "made it." The time to migrate depends on the amount of data and the capacity of the network link between the sites, so moves can be throttled.
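To make that dependency concrete, a back-of-the-envelope estimate of initial sync time from data size and link capacity can be sketched as follows. All figures here (the 100 Mbps link, the 70% efficiency factor, the `sync_hours` helper) are hypothetical illustrations, not vendor numbers:

```python
# Rough estimate of P2V initial-sync time over a WAN link.
# Real-world throughput is lower due to protocol overhead,
# latency and competing traffic -- hence the efficiency factor.

def sync_hours(data_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Hours to copy data_gb over a link_mbps link at the given efficiency."""
    effective_mbps = link_mbps * efficiency
    seconds = (data_gb * 8 * 1000) / effective_mbps  # GB -> megabits
    return seconds / 3600

# A 500 GB server over a 100 Mbps link at 70% efficiency:
print(round(sync_hours(500, 100), 1))  # roughly 15.9 hours
```

Numbers like these are why throttling and trickling servers over ahead of the cutover matter: the ongoing delta sync after the initial copy is far smaller than the first pass.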
As with any IT initiative, a data center move offers several opportunities and side benefits. If you do not definitively know the entire list of assets and interdependencies that were created intentionally or unintentionally within the data center, this can be an ideal time to move towards more of a service-oriented architecture (SOA) or information technology service management (ITSM) model.
Understanding the move list should be coupled with analyzing the performance consumed by your data center and how the servers get compacted into move groups. What assets are fully tapped out resource-wise and which are idle, chewing up floor space, electricity and management cycles?
Taking this additional step to understand the performance characteristics of the servers prior to the move allows you to get a good baseline of allocation versus consumed resource and can justify technology upgrades and virtualization.
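The allocation-versus-consumption baseline described above can be captured in a simple pass over the inventory. The server names, utilization figures and thresholds below are hypothetical placeholders for whatever your monitoring tools report:

```python
# Hypothetical baseline: allocated vs. consumed resources per server,
# used to flag idle boxes as virtualization/consolidation candidates.

servers = [
    # (name, allocated CPU cores, avg. CPU util %, avg. RAM util %)
    ("web01",  4,  6, 22),
    ("db01",  16, 71, 64),
    ("app02",  8,  4, 15),
]

def consolidation_candidates(inventory, cpu_threshold=10, ram_threshold=30):
    """Servers running below both thresholds are candidates to virtualize."""
    return [name for name, _, cpu, ram in inventory
            if cpu < cpu_threshold and ram < ram_threshold]

print(consolidation_candidates(servers))  # ['web01', 'app02']
```

A report like this, gathered over weeks rather than a single snapshot, is what justifies the technology upgrades and virtualization mentioned above.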
Who, what, where, when, how and why
Once you have a good performance baseline and the requisite application maps, consider network capacity. P2V software requires high bandwidth between the sites, especially when transferring large amounts of data or when an application has a high data change rate; distance, and the latency it introduces, is also a factor.
The good news is that P2V software is not an all-or-nothing concept; you can choose what servers get moved and how. You can decide how long to keep the servers in sync, and when exactly the cutover should take place or switch the target and source, which keeps the original server in place. If you are building a new environment at the target site, whether physical or virtual, you can choose where and how to migrate or trickle servers over. If you are introducing new hardware -- heterogeneous or homogeneous -- decisions on migrating servers can be an output of the capacity/performance process.
This software and approach is a win-win: even when new hardware technology is introduced, users and the business get a consistent, constant data stream and a negotiated cutover time with the least impact on the business.
Don't move everything
In my experience, I've seen businesses gain up to a 14:1 server consolidation ratio during this process. Once you see that server performance is running at single-digit percentages, the case for consolidation is not difficult to make.
Be sure to understand the capacity requirements of what needs to be moved and integrate the application base into the timeline. A high level view of a data center move would be:
- Do a thorough physical inventory. Know what you have, don't guess. Take into account the performance attributes so if you are consolidating or virtualizing, you have a baseline. Only physically move what you have to physically move.
- Physical or virtual? What can be virtually moved, what must be physically moved or both? Careful performance monitoring may indicate that an application requires a standalone, non-virtualized server, but you still have the option to use P2V software to migrate the server via the network. Trickle servers/applications over to the new site ahead of the physical move of other equipment.
- Have a backup plan. Determine which data can be replicated via storage array technology to hasten or augment the mitigation plan. If you can replicate, synchronously or asynchronously, you increase your recovery options should a move not work out.
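The checklist above can be sketched as a simple classification pass over the inventory. The server records, thresholds and the `plan_move` helper are hypothetical, a planning aid rather than any vendor's workflow:

```python
# Hypothetical move-planning pass: tag each inventoried server with a
# migration method based on criteria from the checklist above.

def plan_move(server):
    """Return a (method, note) pair for one server record."""
    if server["cpu_util_pct"] < 10:
        return ("P2V", "idle; consolidate onto a virtual host at the target site")
    if server["standalone_required"]:
        # Still migrated over the network, just to a physical target (P2P).
        return ("P2P", "needs dedicated hardware; migrate via network, not truck")
    return ("P2V", "trickle over the network ahead of the physical move")

inventory = [
    {"name": "file01", "cpu_util_pct": 5,  "standalone_required": False},
    {"name": "erp01",  "cpu_util_pct": 55, "standalone_required": True},
]

for s in inventory:
    method, note = plan_move(s)
    print(s["name"], method, note)
```

The point of the sketch is the decision order: inventory first, then performance data, and only then a migration method per server, so the truck is reserved for what truly cannot travel over the wire.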
In this era of server sprawl within the data center -- thanks to the widespread adoption of the x86 server platform -- the complexity of a physical relocation project can't be overstated. The emerging, but surprisingly mature, technology of virtualization and P2V software can make the physical move process less painful and expensive.
About the author: James Geis is director of integrated solutions development for Forsythe, a business and technology consulting firm. His area of expertise is server, storage, and desktop technology solutions, including x86 virtualization, Microsoft and related desktop technologies, information policy and management, server and storage consolidation and optimization, backup and restore, and data replication and archiving.