This tip explains how server virtualization stands to benefit Linux and Unix operating system adoption while most likely harming Windows Server adoption, by breaking the one-operating-system-to-one-server model. You'll review the drawbacks of using Windows operating systems and the rewards that virtualization offers not only IT architects but also end users and application vendors, and learn how these factors will combine to affect the current use of Linux, Unix and Windows operating systems.
Like dinosaurs in their time, the monolithic, commercial operating systems (OSes) have ruled their world for an age. Server virtualization has heralded a new era, however, and OSes as we have known them probably won't thrive in it.
As its uses continue to grow, server virtualization will pose a major threat to the strategic position that the general-purpose operating system has long held in the x86 software stack. In this situation, Microsoft in particular has a lot to lose. So do Linux and Unix vendors, but these OSes do have advantages in a virtualization setting.
In this tip, I describe the weaknesses of monolithic platforms and how server virtualization offers relief. This is part two of my three-part series on the impact of server virtualization on operating systems. Previously, I looked at how server virtualization is upending operating systems. Finally, I'll cover the challenges posed by this major change in the IT landscape.
Windows, Unix and Linux
For the past 30-plus years, general-purpose operating systems such as Linux, Unix and Windows have occupied a strategic rung in the x86 server software stack. The operating system has controlled how users and applications access both system hardware resources and devices that reside on the system or network.
The OS has also provided the platform on which applications run, along with the interface that users go through to run and interact with those applications. And the OS has provided the key interfaces for developers to write code and for administrators to manage the entire (hardware and software) system.
As the driving layer in the x86 software stack, general-purpose operating systems have provided many benefits, many of which are now taken for granted. End users of operating systems such as Windows and Linux have been able to count on the availability of a large number of useful applications, which as a rule tend to run reliably on each new OS release. Similarly, these OSes have supported a broad range of storage, networking and other peripheral devices that enrich the value of the server platform.
Users have for the most part not needed to worry about qualifying and testing each new device and application they install, since this burden has largely been borne by the device and application providers working directly with OS vendors. Both developers and system administrators have benefited from the useful tools and interfaces that the vendors pre-qualify, test and support with each new release. All of these various constituencies have derived substantial value from the tools, facilities and solutions that come with the OS.
But the monolithic, commercial operating system also has some significant downsides.
Microsoft Windows operating system drawbacks
Microsoft Windows, in particular, has suffered from technical issues, which seem to have grown in number with each new generation. As the lines of source code in Windows releases have increased with each new OS family (on the desktop, for example, from an estimated 40 million in Windows XP to more than 50 million in Vista), the number and layers of interdependencies have grown as well. The newly written code in these releases tends to have a number of flaws, which are often not fixed until Service Pack 1 or 2. We believe that a similar pattern has occurred with Windows Server releases.
Microsoft Windows releases have also traditionally been plagued with security vulnerabilities. Though Microsoft has made considerable progress in fixing these holes, specifically in the Windows XP Service Pack 2 release in 2004 and more recently in Vista, new releases of the operating system are still a big target for hackers. As a result, most large corporate customers now wait at least 18 to 24 months before deploying a new Windows operating system.
In addition to these technical drawbacks, a number of companies have expressed concern about the value they get as a Microsoft customer. Increasing licensing costs over the past ten years have led a growing number of companies to evaluate Linux as an alternative for their server environments. Over that time, the Linux share of the x86 server market has grown significantly.
As the time between major Microsoft releases has lengthened over time, some industry observers have also questioned whether innovation is happening quickly enough. In Microsoft's defense, these are difficult problems to solve, and are arguably inherent in large, complex software programs.
Though a growing number of server customers are embracing Linux for at least some of their application needs, a large percentage are still dependent in one way or another on Microsoft technology. Even in the best case, migrating enterprise applications and data from Windows to Linux can be a daunting and expensive task. Many companies, both large and small, would prefer the flexibility to run the application best suited for each function or operational task, irrespective of the operating system it happens to run on, and without the associated costs and disadvantages of running and supporting a monolithic, commercial operating system. For many such companies, server virtualization provides a welcome alternative.
Server virtualization's benefits
Companies seeking to consolidate their server resources and reduce IT costs have increasingly adopted server virtualization as an alternative to the "one-OS-to-one-server" model that has long been the industry standard.
Server virtualization brings customers compelling benefits, including greater server utilization, increased flexibility and lower costs. Though industry observers agree that less than 10% of x86 servers have been virtualized to date, that penetration rate is expected to grow rapidly over the next few years, as core virtualization capabilities are embedded in server hardware.
As its use continues to grow, server virtualization will pose a major threat to the strategic position that the general-purpose operating system has long held in the x86 software stack.
In full, hardware-based virtualization architectures, such as those provided by VMware and Citrix, the hypervisor serves as the key intermediate layer between hardware and applications. A thin piece of software that runs directly on server hardware, the hypervisor allocates CPU, memory and I/O resources among virtual machines, intercepting and handling the privileged x86 instructions that guest software cannot execute directly on the hardware. Embedded hypervisors such as VMware ESX Server 3i and XenExpress OEM Edition will be pre-integrated in server hardware and will boot automatically along with the server.
By interposing itself between operating system and hardware, the hypervisor in effect drives a wedge between the OS and hardware vendors, and threatens the controlling position that vendors such as Microsoft currently occupy in the software stack.
While hypervisors provide a key layer between application and hardware, and offload traditional OS functions such as the allocation of CPU, memory and networking resources, they do not handle the user- and application-oriented functions performed by the OS. In a virtualized server architecture, these functions take place inside a virtual machine, in which applications are paired with an associated operating system. Thus, the operating system and associated software running inside a virtual machine must provide a file system and user interface, as well as an interface to independent systems and application management products that might be running in the environment. The operating system today must also provide drivers for local peripheral devices such as printers.
A growing number of application providers are now packaging their software in virtual appliances, coupled with the appropriate operating system and middleware components. The OS and application are pre-built and pre-configured, so that they are ready to run when the virtual machine is started.
Most virtual appliances today are based on Linux and include a Web-based interface, a Web services API, some type of connection to shared storage and a basic firewall.
The use of virtual appliances enables independent software vendors (ISVs) to optimize the software stack in advance by tailoring the operating system to the application. End users download the appliance in a secure, self-contained virtual machine, and then run it on the appropriate virtualization platform. Upon full adoption of the Open Virtual Machine Format (OVF), which is expected in the near future, users will be able to download OVF-packaged virtual appliances, and run them on the virtualization platform of their choice, including those offered by VMware, Citrix and Microsoft.
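To make the packaging concrete, here is a heavily simplified sketch of the kind of XML descriptor that travels with an OVF-packaged virtual appliance. The element names follow the general structure of the OVF specification, but this is an abbreviated illustration, not a complete or validating descriptor, and the file names and values shown are hypothetical.

```xml
<!-- Simplified sketch of an OVF descriptor (.ovf file): the XML manifest
     that accompanies a packaged virtual appliance. Hypothetical values. -->
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <!-- The virtual disk image(s) shipped alongside this descriptor -->
  <References>
    <File ovf:id="file1" ovf:href="appliance-disk1.vmdk"/>
  </References>
  <DiskSection>
    <Info>Virtual disks used by the appliance</Info>
    <Disk ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:capacity="8589934592"/>
  </DiskSection>
  <VirtualSystem ovf:id="example-appliance">
    <Info>Pre-built, pre-configured Linux OS plus application</Info>
    <!-- CPU, memory and device requirements would follow in a
         VirtualHardwareSection, which the target hypervisor reads
         when it imports and starts the virtual machine. -->
  </VirtualSystem>
</Envelope>
```

Because the descriptor declares the appliance's disks and hardware requirements in a vendor-neutral form, any OVF-compliant virtualization platform can, in principle, import and run the same package.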
Benefits to end-users
The effective "re-layering" of traditional operating system functions into hypervisors and virtual appliances will greatly benefit end users. Due to their streamlined design and small footprint, including only a minimal number of external interfaces, embedded hypervisors like VMware ESX Server 3i and XenExpress OEM Edition reduce the potential for both software failures and security breaches. As these embedded hypervisors mature, end users should enjoy greater reliability and security by running them than they would get by running Microsoft Windows as the native OS. Users will also benefit from much faster deployment times, since both the hypervisor and virtual appliance are designed as ready-to-run modules.
Compared to traditional operating systems like Unix and Microsoft Windows, the smaller code base and more focused functionality of these server virtualization components should significantly accelerate innovation, since each software layer will be quicker to update and easier to maintain.
Benefits to application vendors
Application providers also benefit in a big way by packaging and distributing their software in virtual appliances. Think about what a typical ISV must go through today: upon each new release cycle, the ISV must test each "flavor" of its application on every basic hardware platform and OS version it supports.
Since most ISVs support multiple server platforms (from as few as three or four to as many as 10 to 15) along with a number of different OS versions, the scope and complexity of the resulting test matrix can be daunting. Once the OVF standard has been fully adopted, ISVs will have the luxury of writing and testing their application for a single hardware platform and OS build. OVF compliance should enable the virtual appliance to run across multiple hypervisors, following an automated conversion step. This will dramatically streamline the typical ISV test matrix, reducing both testing costs and time-to-market.
The virtual appliance also simplifies ISV distribution, by enabling providers to offer their applications for download in a secure, self-contained environment.
Linux and Unix up, Windows Server down
The advent of server virtualization, and particularly the migration of traditional OS functions to hypervisors and virtual appliances, will likely have a mixed impact on operating system vendors. Linux and Unix vendors, on the one hand, stand to benefit from the highly efficient packaging and distribution channel that virtual appliances will enable. But none of the OS vendors are thrilled with the increasing adoption of hypervisors, which displace the OS as the controlling layer between hardware and applications. This development threatens Microsoft in particular, since it could cut significantly into the substantial annual revenue stream driven by the Windows Server OS. Microsoft will, of course, have considerable influence on how all of this plays out, as it delivers its own brand of server virtualization to market later this year.
So, while server virtualization has the potential to remove a number of the limitations imposed by traditional operating systems, and ultimately redefine the roles they play in the x86 software stack, much work remains to be done before this vision can become a reality. Next: Rough road for changing operating systems for virtualization
Jeff Byrne is a senior analyst and consultant at Taneja Group, where he focuses primarily on the server virtualization market. Prior to joining Taneja Group, Jeff spent more than five years at VMware as vice president of marketing and later vice president of corporate strategy. Jeff's past experience includes marketing leadership roles at companies such as MIPS, HP and Novell.