What happens to legacy applications when virtualization and multicore servers enter data centers? While virtualization technologies offer an amazing array of possibilities to IT professionals, older programs need not be sacrificed.
In fact, aside from consolidation, load balancing and enhanced security, another application of virtualization involves migrating and hosting legacy applications on newer, more cost-effective hardware. This two-part series focuses on preserving IT investment.
Migration of legacy applications to virtualized hardware
Several options are available for moving legacy code onto new hardware:
- source-based porting
- binary virtualized re-hosting
- binary cross-platform re-hosting
The first path does not usually involve virtualization at all. Rather, it entails porting a legacy application from its original or current host onto a new one. A key requirement for this path is availability of source code. With source code in hand, developers can perform a quick-and-dirty port to accommodate legacy code intact in its prior form and function, or can invest more effort to re-architect the code to leverage the features and capabilities of the new target platform.
The second route allows IT managers to migrate binary-only legacy code onto new hardware by encapsulating and running that code, together with its original operating system, support libraries, etc., in a virtual machine. This path is probably the most familiar to readers of this site, but remember that binary re-hosting requires that the original host and the new one share CPU architectures (for example, Intel Architecture to Intel Architecture, SPARC to SPARC, etc.).
The third and most tortuous path confronts IT teams and embedded developers who need to migrate binary-only legacy code across hardware platforms, such as SPARC to Intel, MIPS to Power Architecture, M68000 to ARM and other seemingly disjointed migrations. This type of migration is accomplished with cross platform virtualization, wherein legacy binary application code, OS and support libraries run in a virtual machine that also supports emulation of legacy instruction sets. Typical native virtual machines use CPU hardware to partition computer resources among guest OSes and applications; cross platform VMs must also accommodate the particulars of a different CPU architecture (instructions, register sets, interrupts and exception processing).
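At its core, the instruction-set emulation described above is a fetch-decode-execute loop run in software. The following minimal sketch (not any vendor's product; the toy opcodes and register names are invented for illustration) shows how a cross-platform VM interprets one legacy instruction at a time, which is also why naive emulation is slow:

```python
def emulate(program, registers=None):
    """Interpret a list of (opcode, *operands) tuples for a toy legacy ISA."""
    regs = registers or {"r0": 0, "r1": 0, "r2": 0}
    pc = 0  # emulated program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":        # LOAD reg, immediate
            regs[args[0]] = args[1]
        elif op == "ADD":       # ADD dst, src  ->  dst += src
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
        else:
            raise ValueError(f"unknown legacy opcode: {op}")
        pc += 1                 # track resultant state, one instruction at a time
    return regs

# Each guest instruction costs many host instructions to decode and dispatch.
legacy_binary = [("LOAD", "r0", 2), ("LOAD", "r1", 40), ("ADD", "r0", "r1"), ("HALT",)]
print(emulate(legacy_binary)["r0"])  # 42
```

A real cross-platform VM must additionally model register sets, interrupts and exception processing of the foreign CPU, but the dispatch loop above is the essential (and costly) mechanism.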
Examples of this technology include:
- Microsoft Virtual PC for Mac, emulating x86 PCs on Power Architecture Macintoshes
- Apple Rosetta (built on Transitive technology), supporting execution of Power Architecture MacOS applications on newer Intel Core 2 Duo machines
- Transitive QuickTransit, supporting execution of Sun SPARC programs on Intel and AMD silicon
- Access Garnet VM, supporting execution of DragonBall applications on ARM-based PalmOS phones and of PalmOS applications on ALP, the Access Linux Platform.
Cross-platform virtualization may seem exotic and risky at first, but most IT managers actually deploy a form of this technology every time they use Java. A Java Virtual Machine, or JVM, has all the attributes of cross-platform virtualization: binary code deployment (byte code) and instruction-set emulation of that byte code on the target host. The key difference is that Java byte code represents the instruction set of an abstract CPU rather than a specific piece of hardware, with JVMs supporting both current and legacy Java code on an equal basis.
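The JVM analogy in the paragraph above holds for other managed runtimes as well. Python, for example, compiles source to byte code for an abstract stack machine and then emulates that byte code on whatever CPU the host provides; the standard `dis` module makes those abstract-machine instructions visible:

```python
import dis

def add(a, b):
    return a + b

# Disassemble to the abstract stack-machine instructions (e.g. LOAD_FAST);
# exact opcode names vary between interpreter versions.
dis.dis(add)
```

The disassembly lists instructions for no physical CPU at all, which is precisely what lets the same byte code run unmodified on SPARC, x86 or ARM hosts.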
Cross-platform virtualization performance
A shortcoming of simple cross-platform virtualization is performance. Emulating a different machine involves interpreting that machine's instructions and tracking resultant states sequentially, delivering significantly slower execution than equivalent native code on the same host system. To overcome this performance bottleneck, cross-platform virtualization relies on two main techniques.
The first builds on execution profiling analysis showing that typical applications spend no more than one quarter of execution time in mainline code. The remaining CPU cycles and wait times are expended inside run-time libraries and in OS kernel system calls. Application source code may not be available to independent software and OS vendors, but the code for the OS and libraries often is, whether internally, under open source licenses or even under proprietary license terms. To accelerate guest application execution, cross-platform virtualization suppliers either port the entire OS and support stack over to the target CPU architecture, or selectively create a native implementation of the application programming interfaces (APIs) used by guest applications in the course of normal execution. While binary legacy guest applications must themselves run under instruction-set emulation, as soon as they call an API, or at least the most frequently encountered APIs, they enter cross-platform support code running at native execution speeds.
The second accelerator shares attributes with Java: just-in-time (JIT) compilation. JIT uses a combination of techniques to streamline execution of interpreted cross-platform code. Whenever possible, the VM pre-interprets or compiles legacy instructions into native binary form before they actually run, and also caches interpreted code in native binary form on the fly for potential re-use. These and other compiler-like optimizations help cross-platform VMs approach native execution speeds.
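A hedged sketch of the translate-and-cache idea follows. Here a "translation" to a Python function stands in for real binary translation to host machine code, and the block identifiers and toy instructions are invented for illustration; the point is that a hot block pays the translation cost once and every later execution skips interpretation:

```python
translation_cache = {}

def translate_block(block_id, legacy_instructions):
    """Compile a legacy basic block to native form, caching for re-use."""
    if block_id not in translation_cache:
        # One-time cost: build a native routine equivalent to the block.
        def native_block(regs, ops=tuple(legacy_instructions)):
            for op, dst, src in ops:
                if op == "ADD":
                    regs[dst] += regs[src]
            return regs
        translation_cache[block_id] = native_block
    return translation_cache[block_id]  # cache hits bypass the interpreter

regs = {"r0": 40, "r1": 2}
hot_block = [("ADD", "r0", "r1")]
for _ in range(3):  # translated once, executed three times
    translate_block(0x1000, hot_block)(regs)
print(regs["r0"])  # 46
```

Real JIT engines add profiling to decide which blocks are hot and compiler-style optimizations across translated blocks, but the cache is the core of the speedup.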
In this tip, we explored several paths to migrating legacy code using virtualization and peeked into the underlying technology. Part two will provide examples of how to leverage both native and cross-platform virtualization in real-world migration scenarios with concrete benefits.
About the author: Bill Weinberg is an independent analyst for Linuxpundit.com and serves in a part-time executive capacity for the Linux Phone Standards Forum (LiPS). Previously, at Open Source Development Labs (OSDL), he served as senior technology analyst and also managed the OSDL Mobile Linux and Carrier Grade Linux initiatives. Prior to OSDL, Weinberg was a founding member of MontaVista Software, helping to pioneer and ultimately to establish Linux as a leading platform for embedded devices.