Leveraging legacy IT with virtualization

Migrating legacy applications to virtual machines maximizes the benefits of next-generation hardware. Learn how to retain older programs and preserve IT investments by following these examples of native and cross-platform virtualization migration.

The biggest obstacle to taking advantage of newer and faster hardware, especially multicore CPUs and multiprocessor systems, is the need to protect and preserve legacy IT investments. Organizations invest substantial resources in building and integrating critical applications and software infrastructure, and their IT professionals look to virtualization to maximize next-generation hardware without incurring substantial incremental migration costs.

The first part of this series reviewed different paths to migrating legacy code using virtualization with a view into the underlying technology. This tip provides examples of how IT teams can leverage native and cross-platform virtualization in real-world migration scenarios with concrete benefits.

Series: Legacy IT and virtualization
Part one: Preserving legacy IT investment with virtualization

Full image re-hosting and consolidation
The most common scenario entails moving an application, compute load or stack to newer, faster hardware, often consolidating multiple legacy compute loads onto a single host. While it is certainly possible to consolidate those loads on a single physical host running the same OS, IT teams frequently encounter a number of obstacles to such a migration:

  • Legacy OS/version availability and performance on the new host hardware
  • Incompatible dependencies on libraries/versions, middleware, kernel modules and device drivers among migrating compute loads
  • Interactions among once-separate compute loads and processes now running on a single OS instance, e.g., synchronization and contention issues
  • Unforeseen integration challenges, including the need to rebuild with static vs. dynamic linking, ad hoc use of file systems and uncatalogued legacy hacks

Virtualization provides a more straightforward migration path. It avoids most or all of these pitfalls by retaining the separation and run-time attributes of each legacy compute load in its own virtual machine, along with an instance of the legacy OS/version, libraries, middleware and other software upon which the legacy load depends to function correctly. In many cases, legacy system software can migrate intact, bit for bit, and require no further integration, QA or validation. In other cases, the challenges to carefree image re-hosting are few and more easily surmounted (a minimal re-hosting sketch follows the list below):

  • Reconfiguring subnets and IP address dependencies when migrating into different network topologies
  • Eliminating BIOS dependencies and updating boot sequences
  • Configuring virtual driver support for legacy hardware and/or paravirtualizing legacy device drivers
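
As an illustration of this path, the sketch below uses the libvirt Python bindings to define and boot a re-hosted legacy load, pinning the boot order and presenting a NIC model that older guest OSes already recognize. The domain name, disk path, memory size and NIC model are assumptions chosen for the example, not prescriptions, and the sketch presumes the legacy disk has already been captured and converted to a format the hypervisor understands (for example, with qemu-img).

# Minimal sketch, assuming python-libvirt, a local qemu:///system hypervisor
# and a legacy disk image already converted, e.g.:
#   qemu-img convert -O qcow2 legacy-disk.raw legacy-app.qcow2
# Names and paths below are illustrative placeholders.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>legacy-app-01</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>                  <!-- pin the boot sequence -->
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/legacy-app.qcow2'/>
      <target dev='hda' bus='ide'/>   <!-- IDE suits older in-guest drivers -->
    </disk>
    <interface type='network'>
      <source network='default'/>     <!-- adjust for subnet/IP dependencies -->
      <model type='rtl8139'/>         <!-- a NIC model the legacy OS already knows -->
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open('qemu:///system')   # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the re-hosted legacy load
dom.create()                            # boot the unmodified legacy image
conn.close()

Each additional legacy load gets its own domain definition on the same host, which is what preserves the separation described above.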

Furthermore, this migration path works best for legacy OSes with off-the-shelf support from hypervisor software, as with most versions of Windows, Linux and BSD, and increasingly Solaris. Other legacy OSes, such as AIX, MacOS, SCO Unix/Xenix, VMS and embedded OSes, will at least need targeted support for imaging and installation and may also need cross-platform support (see below).

Image re-hosting with enhanced availability
The previous scenario offers more than just an opportunity to consolidate hardware and improve performance; it can also give legacy loads better uptime and greater overall reliability. As described in "Achieving high availability in a virtualized environment," re-hosting legacy loads with virtualization provides a range of opportunities for increasing system availability by:

  • Testing and sandboxing migrated loads within virtual machines
  • Monitoring the health of each migrated load as a discrete virtual machine
  • Using virtual machine snapshot functionality to perform checkpointing (a monitoring and checkpointing sketch follows this list)
  • Running spare virtual instances of legacy loads in a cluster for faster failover
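
The sketch below suggests what the two middle items might look like with the libvirt Python bindings: poll the re-hosted load's state as a whole and checkpoint it periodically via snapshots. The domain name, interval and failover action are assumptions for illustration only.

# Minimal sketch, assuming python-libvirt and a re-hosted legacy VM named
# 'legacy-app-01'; the interval and the failover hook are placeholders.
import time
import libvirt

DOMAIN = 'legacy-app-01'
INTERVAL = 3600                                  # checkpoint once an hour

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName(DOMAIN)

while True:
    state, _reason = dom.state()                 # health of the whole migrated load
    if state != libvirt.VIR_DOMAIN_RUNNING:
        print(f'{DOMAIN} is not running (state={state}); trigger failover here')
        break

    snapshot_xml = f"""
    <domainsnapshot>
      <name>checkpoint-{int(time.time())}</name>
      <description>periodic checkpoint of legacy load</description>
    </domainsnapshot>
    """
    dom.snapshotCreateXML(snapshot_xml, 0)       # checkpoint via VM snapshot
    time.sleep(INTERVAL)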

Cross-platform re-hosting
Standard virtual machine technology facilitates legacy code migration when the legacy and target CPU architectures are homogeneous: IA/x86 to IA/x86, SPARC to SPARC, Power Architecture to Power Architecture, and so on. As described in the first part of this series, heterogeneous binary migration among these (or other) architectures requires a cross-platform solution.

As a real-world example, consider moving a legacy Sun SPARC/Solaris load to IA/x86 commodity hardware running Linux. A first-generation migration typically involves top-to-bottom emulated execution: all code from the legacy system, from the application down through middleware, libraries, device drivers and the OS, and even boot code or BIOS, executes in a virtual machine that also emulates the instruction set, state and behavior of the legacy hardware.

This approach, while comprehensive, suffers a performance deficit from the need to emulate each legacy instruction on the target virtual and physical hardware. With a reduced instruction set computer (RISC) architecture like SPARC, emulation on IA/x86 requires multiple complex instruction set computer (CISC) instructions for each of the more numerous (albeit simpler) SPARC instructions, especially to accommodate the load-store nature of RISC instruction sets.
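
To see where that overhead comes from, consider a deliberately toy fetch/decode/dispatch loop for an imaginary RISC-like guest (not real SPARC semantics): every single guest instruction costs several host-level operations before any useful work happens.

# Toy illustration only: each guest instruction triggers host-side fetch,
# decode and dispatch work, which is the root of full-emulation overhead.
def emulate(program, regs, mem):
    pc = 0                                   # emulated program counter
    while pc < len(program):
        op, dest, a, b = program[pc]         # fetch + decode (host work)
        if op == 'add':                      # dispatch (host work)
            regs[dest] = regs[a] + regs[b]   # the guest's one actual add
        elif op == 'ld':                     # load-store access via emulated memory
            regs[dest] = mem[regs[a] + b]
        pc += 1
    return regs

regs = emulate([('ld', 'r1', 'r0', 0),       # r1 = mem[r0 + 0]
                ('add', 'r2', 'r1', 'r1')],  # r2 = r1 + r1
               {'r0': 0, 'r1': 0, 'r2': 0},
               mem=[21])
print(regs['r2'])                            # -> 42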

Cross-platform VM suppliers invest substantially in optimizing this demanding emulation process through pre-compilation, just-in-time translation and instruction caching. Even so, the performance gap usually remains large enough to make full-stack cross-platform virtualization useful only for evaluation or stop-gap purposes.
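
The caching idea is easy to sketch: translate each guest block once, cache the result and reuse it on later visits. The keying and the stand-in "translation" below are purely illustrative and do not describe how any particular product works.

# Toy translation cache: translate a guest block once, then reuse it.
translation_cache = {}

def translate_block(block):
    # Stand-in for real binary translation: fold the guest block into a
    # single host-side callable.
    def run(regs):
        for op, dest, a, b in block:
            if op == 'add':
                regs[dest] = regs[a] + regs[b]
        return regs
    return run

def execute(block_addr, block, regs):
    if block_addr not in translation_cache:      # translate on first visit...
        translation_cache[block_addr] = translate_block(block)
    return translation_cache[block_addr](regs)   # ...reuse on every later visit

regs = {'r1': 1, 'r2': 2, 'r3': 0}
for _ in range(3):                               # repeat visits hit the cache
    regs = execute(0x1000, [('add', 'r3', 'r1', 'r2')], regs)
print(regs['r3'])                                # -> 3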

Optimized cross-platform re-hosting
IT professionals with performance-sensitive legacy loads need not despair. Cross-platform performance gaps can be spanned with two complementary approaches: faster target hardware and incremental native execution.

While chipset-level emulation on comparable hardware may not make the performance cut, most migration target hardware is chosen precisely because it outperforms the legacy boxes and boards it replaces, in some cases by large factors. So, while emulated legacy software may not leverage the full compute capability of newer systems and CPUs, it will generally run at parity with prior deployments, and often substantially faster, in spite of emulation overhead.

Further performance improvements come from migrating execution of commodity code (OS, base drivers, libraries, middleware) to run natively on the target architecture, inside the same virtual machine as the fully emulated code.

Optimized cross-platform execution of legacy applications and support code
Such native migration is the equivalent of native porting for support code, not for the value-added application code maintained by IT organizations and application developers. As such, it is an activity best left to cross-platform VM suppliers like Transitive, Microsoft, ACCESS and others. However, corporate IT teams, independent service providers, developers and open source hackers can follow the lead of such suppliers to optimize their own applications by incrementally migrating performance-sensitive sections of code from emulated execution to native execution, recompiling and rebuilding wherever possible.
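
A reasonable first step in that incremental approach is simply measuring where the application spends its time. The sketch below uses Python's cProfile as a stand-in for whatever profiler fits the legacy toolchain; hot_path and cold_path are hypothetical application routines invented for the example.

# Sketch: profile first, then decide which sections justify a native rebuild.
# hot_path/cold_path are hypothetical application routines.
import cProfile
import pstats

def hot_path():
    return sum(i * i for i in range(200000))     # performance-sensitive section

def cold_path():
    return 'housekeeping'                        # rarely exercised support code

def application():
    for _ in range(50):
        hot_path()
    cold_path()

profiler = cProfile.Profile()
profiler.runcall(application)
stats = pstats.Stats(profiler).sort_stats('cumulative')
stats.print_stats(5)     # top entries are the candidates for native rebuilding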

Conclusion
Virtualization provides a valuable toolbox for IT teams and developers who need to preserve legacy investment in code, configurations and quality assurance. The shortest path and most easily realized benefits come from homogeneous migration of compute loads to virtualized instances of legacy systems on newer, more powerful and usually consolidated hardware. Users, maintainers and developers of legacy systems on more exotic CPUs and systems can also benefit from migration to newer, commodity hardware through heterogeneous migration using cross-platform virtualization. Both homogeneous and heterogeneous schemes feature multiple migration paths and options for making incremental investments in optimizing re-hosted application performance.

About the author: Bill Weinberg is an independent analyst for Linuxpundit.com and serves in a part-time executive capacity for Linux Phone Standards Forum (LiPS). Previously, at Open Source Development Labs (OSDL) he served as senior technology analyst and also managed the OSDL Mobile Linux and Carrier Grade Linux initiatives. Prior to OSDL, Weinberg was a founding member of MontaVista Software, helping to pioneer and ultimately to establish Linux as the leading platform for intelligent devices.

This was first published in May 2008
