
server virtualization

Server virtualization is a process that creates and abstracts multiple virtual instances on a single server. A server administrator uses virtualization software to partition one physical server into multiple isolated virtual environments; each virtual environment is capable of running independently. The virtual environments are sometimes called virtual private servers, but they are also known as guests, instances, containers or emulations.

A single dedicated server can run only one operating system (OS) instance at a time. It requires its own OS, memory, central processing unit (CPU), disk and other hardware to operate properly, and it has complete access to all of its hardware resources. A virtualized server, on the other hand, can run multiple independent OSes, all with different configurations. Server virtualization also masks server resources, including the number and identity of individual physical servers, processors and operating systems, and it shares the physical server's resources among the abstracted virtual instances running on it.

Server virtualization allows organizations to cut down on the number of servers they need, saving money and reducing the hardware footprint of maintaining a fleet of physical servers. It also allows for much more efficient use of IT resources: an organization concerned about under- or overutilized servers is a good candidate for virtualization. An organization can also use server virtualization to move workloads between virtual machines (VMs), consolidate hardware or virtualize small and medium-scale applications.

Server virtualization can be viewed as part of an overall virtualization trend in enterprise IT that includes storage virtualization, network virtualization and workload management. This trend is one component in the development of autonomic computing, in which the server environment is able to manage itself based on perceived activity. Server virtualization can be used to eliminate server sprawl, make more efficient use of server resources, improve server availability, assist in disaster recovery, testing and development, and centralize server administration.

How does server virtualization work?

Server virtualization works by inserting a hypervisor, a software layer that separates the software from the underlying hardware. Hypervisors come in different types and are used in different scenarios. The most common hypervisor -- Type 1 -- is designed to sit directly on the server hardware, which is why it is also called a bare-metal hypervisor. Type 1 hypervisors provide the ability to virtualize a hardware platform for use by VMs. Type 2 hypervisors run as a software layer atop a host operating system and are used more often for testing and lab environments.
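To make this concrete, here is a minimal Python sketch that connects to a local hypervisor and lists its guests through the open source libvirt API. The libvirt-python bindings and a running QEMU/KVM host are illustrative assumptions -- the article doesn't name either.

```python
# Minimal sketch: query a local hypervisor through libvirt.
# Assumes the libvirt-python bindings (pip install libvirt-python)
# and a running QEMU/KVM hypervisor -- both illustrative assumptions.
import libvirt

# "qemu:///system" is the conventional URI for the local QEMU/KVM driver.
conn = libvirt.open("qemu:///system")

print("Hypervisor driver:", conn.getType())    # e.g. "QEMU"
print("Host name:", conn.getHostname())

# List every guest (libvirt calls them "domains") the hypervisor knows about.
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    running = state == libvirt.VIR_DOMAIN_RUNNING
    print(f"Guest: {dom.name()} (running: {running})")

conn.close()
```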

One of the first steps in server virtualization is identifying which servers an organization wants to virtualize -- for example, a server that doesn't use all of its resources is a good candidate, because the unused capacity can be put to work on other tasks. Once a server is chosen, users should monitor the system to determine the performance and resource usage of the physical deployment before sizing a VM, tracking resources such as memory, disk usage and processor load. This information gives the organization an idea of how many resources to dedicate to each virtual instance.
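As a sketch of that monitoring step, the Python snippet below samples CPU, memory and disk utilization with the third-party psutil library. The choice of psutil, the sample count and the metrics are all assumptions for illustration; any capacity-planning tool would do.

```python
# Sketch of pre-virtualization monitoring with psutil (pip install psutil).
# The sample count and metrics are illustrative; real sizing would sample
# over days or weeks, not seconds.
import psutil

samples = []
for _ in range(5):
    samples.append({
        "cpu_percent": psutil.cpu_percent(interval=1),   # 1-second CPU average
        "mem_percent": psutil.virtual_memory().percent,  # RAM in use
        "disk_percent": psutil.disk_usage("/").percent,  # root volume usage
    })

# Peak observed usage gives a first estimate for sizing the VM.
for metric in ("cpu_percent", "mem_percent", "disk_percent"):
    peak = max(s[metric] for s in samples)
    print(f"peak {metric}: {peak:.1f}%")
```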

Users start the virtualization process with virtualization software. An organization can deploy hypervisors -- such as Microsoft Hyper-V and VMware vSphere -- or use migration tools, such as PlateSpin Migrate. Depending on the server's role, specific components should be virtualized first, such as the supporting applications or the hard disks of a database server. After migration, it may be necessary to adjust the resources allocated to a virtual instance to ensure adequate performance.
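Continuing the libvirt assumption from the earlier sketch, a migrated guest might be registered and sized roughly as follows. Every name, path and allocation in the XML is hypothetical, not taken from the article.

```python
# Sketch: register a migrated guest with the hypervisor via libvirt.
# The domain name, disk path, and CPU/memory sizes are hypothetical.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>migrated-db-server</name>
  <memory unit='GiB'>4</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/migrated-db-server.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(DOMAIN_XML)  # registers the guest without starting it
print("Defined guest:", dom.name())

# If later monitoring shows the guest is starved for memory, the ceiling can
# be raised, e.g.:
# dom.setMaxMemory(8 * 1024 * 1024)  # libvirt takes the value in KiB
conn.close()
```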

Advantages of server virtualization

Some advantages of server virtualization include:

  • Live migration
  • Server consolidation
  • Reduced need for physical infrastructure
  • Each virtual instance can run its own OS
  • Each individual instance can act independently
  • Reduced server costs
  • Reduced energy consumption
  • Easier to back up and recover from disasters
  • Easier to install or set up software patches and updates
  • Ideal for web hosting

Disadvantages of server virtualization

Disadvantages of server virtualization include:

  • Possible availability issues -- if the host fails, every guest running on it goes down with it
  • Possible resource contention when too many virtual instances compete for the host's hardware
  • Possible memory overcommitment, where guests are promised more memory than the host physically has
  • Upfront costs for virtualization software and any required network upgrades
  • Software licensing costs
  • A learning curve -- IT staff with virtualization experience may be needed
  • Security concerns, especially if a virtual server shares a physical host with another organization's workloads

Server virtualization uses and applications

Server virtualization is used to consolidate resources, save money and provide independent environments for software on a single server. As a few practical examples, an IT organization can use server virtualization to reduce the time spent managing individual servers, gain experience before migrating servers to the cloud or run more OSes and applications without adding more physical machines. Server virtualization can also be used on web servers as a low-cost way to host web services. Finally, it can provide redundancy in case of data loss if an organization hosts copies of its data on a virtualized server.

3 types of server virtualization

There are three popular approaches to server virtualization: the virtual machine model, the paravirtual machine model and virtualization at the OS layer.

Virtual machines are based on the host/guest paradigm. Each guest runs on a virtual imitation of the hardware layer. This approach allows the guest operating system to run without modifications and lets the administrator create guests that use different operating systems. The guest has no knowledge of the host's operating system, because it is unaware that it isn't running on real hardware. The guest does, however, require real computing resources from the host, so the hypervisor, called a virtual machine monitor (VMM) in this model, coordinates instructions to the CPU. The VMM validates all the guest-issued CPU instructions and manages any executed code that requires additional privileges. VMware and Microsoft Virtual Server both use the virtual machine model.

The paravirtual machine (PVM) model is also based on the host/guest paradigm, and it uses a virtual machine monitor, too. In the paravirtual machine model, however, the guest operating system's code is modified to cooperate with the VMM. This modification is called porting. Porting lets the guest use privileged system calls sparingly, reducing the work the VMM must do. Like virtual machines, paravirtual machines are capable of running multiple operating systems. Xen and User-Mode Linux (UML) both use the paravirtual machine model.

Virtualization at the OS level works a little differently. It isn't based on the host/guest paradigm. In the OS-level model, the host runs a single OS kernel as its core and exports operating system functionality to each of the guests. Guests must use the same operating system as the host, although different distributions of the same system are allowed. This architecture eliminates system calls between layers, which reduces CPU overhead. It also requires that each partition remain strictly isolated from its neighbors, so that a failure or security breach in one partition can't affect any of the others. In this model, common binaries and libraries on the same physical machine can be shared, allowing an OS-level virtual server to host thousands of guests at the same time. Virtuozzo and Solaris Zones both use OS-level virtualization.
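As a rough modern illustration of the OS-level model (the article's examples are Virtuozzo and Solaris Zones), the Python sketch below assumes Docker -- an OS-level virtualization tool not named in the article -- is installed. Guests built from different Linux distributions all report the same kernel version, because every one of them shares the host's kernel.

```python
# Sketch: demonstrate that OS-level guests share the host kernel.
# Assumes Docker is installed and the listed images can be pulled --
# both illustrative assumptions.
import subprocess

# Different distributions, one kernel: "uname -r" prints the host's
# kernel version inside every container.
for distro in ("alpine", "debian", "ubuntu"):
    result = subprocess.run(
        ["docker", "run", "--rm", "--memory", "256m", distro, "uname", "-r"],
        capture_output=True, text=True, check=True,
    )
    print(f"{distro}: kernel {result.stdout.strip()}")
```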

Hypervisors

A hypervisor is what abstracts an operating system from the underlying computer hardware. This abstraction allows a single host machine to operate multiple VMs as guests -- the guest VMs effectively share the system's physical compute resources. Traditionally, hypervisors are implemented as a software layer and are separated into Type 1 and Type 2 hypervisors. Type 1 hypervisors are most commonly used in enterprise data centers, while Type 2 hypervisors are commonly found on endpoints such as PCs.

History

In the 1960s, IBM developed virtualization of system memory, an early step on the path to server virtualization. In 1972, IBM released VM/370, one of the first commercial virtual machine operating systems; it evolved over the decades into today's z/VM. Since then, VMs and server virtualization have steadily gained popularity. VMware released VMware Workstation in 1999, bringing virtualization to the x86 architecture; 64-bit x64 support followed in later releases.
