Virtualization is radically changing the way we think about computing resources. In 2006 it was rapidly adopted even by companies that typically have not been early IT adopters. While that pace shows how compelling virtualization's benefits are, it also sets up some dangerous situations for IT shops.
Virtualization is a vague term that can refer to several approaches to abstraction. It's critical that we understand the differences, recognize abuses of the terminology and have an idea of how virtualization will change infrastructures over the next decade.
In this tip, I discuss the ways marketers misuse the word virtualization and mislead the public. In part two, I look at two terms and technologies -- server virtualization and operating system (OS) partitioning -- that have been described in many different ways. In the next installment, I cover other abused terms: application virtualization and storage virtualization.
I think virtualization is the most significant change in the IT world in a decade. Profiting from the word itself is an irresistible temptation: more and more companies are using it to describe existing and new technologies that often have little to do with virtualization.
The most obvious case comes from security companies, which have begun to use terms like network virtualization or, in some cases, security virtualization. Here, unscrupulous marketers are relabeling a set of technologies that until yesterday fell under the umbrella name of endpoint security. So established network features like virtual local area networks (VLANs), offered in many kinds of switches for years, or quarantine networks, the foundation of the first endpoint security attempts, are now presented as brand-new capabilities ready to boost product launches.
Another example is application virtualization, which is often confused with the well-known concept of thin computing, in which hosted applications are remotely accessed by low-powered clients. While application virtualization finds a natural fit in thin computing scenarios, the two are definitely different technologies.
Storage is probably the space where the term virtualization is most abused: marketers use it to describe abstraction at the level of blocks, disks (the dear old RAID), file systems, tape drives and so on.
While it's true that many of these features abstract some aspect of computing at some level, renaming them and selling them as new technologies just confuses customers, who should carefully evaluate how and where the term virtualization is used.
Server virtualization is the most mature of today's recognized forms of virtualization. The technology abstracts all the hardware components of a physical server -- processors, memory, networking, mass storage devices and so on -- and offers a completely isolated environment, called a virtual machine, on which you can install several kinds of operating systems, or guests. A powerful enough physical server, or host, can concurrently serve tens of virtual machines.
Server virtualization has a long list of benefits. It:

- Reduces the total number of physical servers that need to be purchased, updated, replaced, powered and maintained;
- Reduces occupied space and power consumption;
- Reduces downtime by enabling cheap fail-over architectures;
- Uses computing resources more efficiently, allocating as many resources as possible per host;
- Reduces the time needed to deploy new, even complex, software configurations;
- Reduces the time needed to adopt new products;
- Moves legacy programs into virtual machines;
- Shares and migrates work environments;
- Enforces security isolation.
In short, server virtualization means significant money savings and highly improved efficiency.
Server virtualization was initially marketed for two purposes: handling legacy platforms and products (which often are what prevents companies from adopting new technologies) and achieving server consolidation. Today, virtualization serves many more purposes, from software development and simplified testing and quality assurance to security, cheaper disaster recovery and help with intrusion detection.
The current market is divided between VMware -- acquired two years ago by EMC Corp. -- and Microsoft, which entered the space in 2003 after acquiring Connectix. A third player, Parallels, arrived in 2005 and is winning customers with its virtualization solution for the new Intel-based Apple Mac OS X machines, but it still has to demonstrate that it can compete on the server side.
Some criticize virtualization for the performance degradation it introduces; in some cases, the loss outweighs virtualization's potential benefits. The first approach to mitigating the loss was called paravirtualization, based on the simple idea of modifying guest operating systems' kernels to assist in hardware abstraction. But this approach raises notable technical problems: every new operating system version has to be modified again for virtual environments, with evident delays in availability and uncertainty about reliability.
And not every vendor grants access to its source code for paravirtualization modifications. That is why the famous open source project Xen, and commercial competitors like Virtual Iron, failed until recently to penetrate the market: without the ability to paravirtualize Windows, whose source is closed, the large majority of companies had no real chance to embrace the technology.
One way out of this blind corner has been offered by the two major CPU makers, AMD and Intel, which have recently introduced virtualization capabilities inside their processors. Coordinated by virtualization products, these CPUs can transparently execute guest operating systems as if each had complete control of the hardware, at the same level as the host OS -- which enables more speed in paravirtualization and a further level of virtual machine isolation.
Recent virtualization platforms rely heavily on these new capabilities and will make the most of them in the next generations. The purchase of new hardware, both servers and desktops, should be influenced by the availability of these CPU enhancements.
Be forewarned: today's server virtualization products are mature enough to offer cost-effective, reliable solutions for the large majority of enterprises. But as soon as those companies deploy wider virtual infrastructures, a completely new class of problems will arise -- and here the market is still failing to offer qualified solutions, which will largely involve more sophisticated automation.
Operating system partitioning is another approach, distinct from hardware abstraction, that is gaining some momentum. Already familiar to Linux users working with UML (User Mode Linux), OS partitioning creates multiple instances of the operating system that share a common set of software packages, while each partition can still independently install new components and maintain its own networking properties.
Some niches, like Web hosting, are turning to this approach, mainly thanks to SWsoft, which has been savvy enough to offer the same technology both as a commercial-grade product (Virtuozzo) and as an open source, scaled-down solution (OpenVZ).
Potential customers frequently examine the differences between traditional server virtualization (hardware abstraction) and OS partitioning. Virtualization provides a higher level of security, but OS partitioning requires less management effort, permitting administrators to patch the operating system or install new common applications just once.