
The history of virtualization and its mark on data center management

Virtualization was a huge leap in data center technology with software such as hypervisors and virtual switches that expanded organizations' capabilities and redefined IT.

Virtualization's history has seen emerging technologies such as x86 architecture, mainframes, hypervisors, single-use PCs and virtual switches. IT administrators who understand its history can better implement these technologies in their data centers.

Virtualization technologies enable admins to share computing resources across a network. Virtualization also helps engineers create systems that cost less to design, and it improves resource utilization and data center functionality. Virtualization continues to offer significant advantages to modern IT, such as increased productivity, faster provisioning and improved disaster recovery and business continuity.

The introduction of virtualization in the late 1960s, early 1970s

Prior to virtualization, engineers and admins relied on key punches, batch jobs and a single OS to satisfy IT operational needs. This was time-consuming and expensive. In 1964, IBM designed and introduced CP-40, the first mainframe system to rely on time-sharing technology. CP-40 supported more than one simultaneous user on a single machine. Not long after, SIMMON, a software testing tool, and CP-40 began production use, offering the first hypervisor to provide full virtualization.

In 1974, Gerald Popek and Robert Goldberg classified the hypervisor into two types: Type 1 and Type 2. The two types helped distinguish between hypervisors that run on bare metal and those that run on top of an OS. Type 1 hypervisors provide increased security because of their location in the physical hardware, which eliminates the attack surface often found in an OS. Type 2 hypervisors are closely related to the beginnings of x86 virtualization and are generally used for client or end-user systems. This is because Type 2 hypervisors run on top of the host OS, which can lead to latency issues and security risks.


Virtualization's foray into the IT industry

In the early 1990s, several virtualization companies introduced services and software to help admins better virtualize their workloads and increase data center efficiency. In 1995, Red Hat Software Inc. released the first generally available version of Red Hat Commercial Linux, which provided an OS based on the Linux kernel.

In 1998, VMware was founded by Diane Greene, Mendel Rosenblum, Scott Devine, Ellen Wang and Edouard Bugnion. The following year, the company released its first product: VMware Workstation. Workstation provided a Type 2 hypervisor that ran on x86 versions of Windows and Linux OSes, which enabled admins to set up VMs on a single machine. Each VM could then run its own OS simultaneously. Later, VMware added support for x64 versions as well.

Virtualization continues to make waves in the 2000s

With many virtualization companies such as VMware, Red Hat, IBM and Citrix Systems well established, virtualization began to take off in the 2000s. Admins had different services and software to choose from to virtualize their data centers and hardware components, such as virtual CPUs, memory, storage and network adapters.


In 2001, VMware released ESX Server 1.0, a Type 1 hypervisor based on the VMkernel OS and geared toward enterprise-level organizations. ESX runs on bare metal and supports features such as traffic shaping, virtual memory ballooning, role-based access control, logging and auditing, GUIs and vSphere PowerCLI.

2003 marked the release of Xen Project, the first open source Type 1 hypervisor. The Xen Project was one of many initiatives to introduce paravirtualization, a technique that modifies the guest OS before it is installed in a VM. Rather than relying on the hypervisor to emulate a hardware environment, the modified OS shares resources and collaborates with the system directly, which can help reduce execution time.

That same year, VMware released VMware VirtualCenter 1.0, which introduced vMotion, the ability to migrate a running VM from one host to another. Four years later, VMware introduced Storage vMotion, which enables admins to migrate a live VM's file system from one storage system to another.

Microsoft made its move with Hyper-V in 2008. Previously known as Windows Server Virtualization, Hyper-V is a native hypervisor that helps admins create VMs on x86-64 systems running Windows. Microsoft shipped the first beta version of Hyper-V with a few x86-64 editions of Windows Server 2008, and Hyper-V was officially released to the public in June of that year.

In 2010, Microsoft continued its virtualization journey with Windows Azure. Azure is a public cloud computing service that couples compute, networking and storage resources with analytics to help admins develop and scale new or existing applications in the public cloud.

Docker Inc. made a name for itself in 2013 with the release of its container software. Prior to containers, the VM was the main virtualization instance type. Both VMs and containers enable admins to virtually package and isolate applications for deployment. But containers offer several benefits that VMs don't, such as increased resource efficiency and scalability. This is because containers share the host's underlying OS kernel rather than each running a full guest OS, which results in smaller, lighter-weight instances.

How virtualization continues to grow in modern IT

Modern virtualization trends focus more heavily on cloud capabilities, and companies such as Amazon Web Services and VMware have made big strides to this end. In 2017, VMware and AWS initiated a partnership that made headlines: VMware Cloud on AWS. VMware's cloud platform is a SaaS product that delivers a vSphere-compatible cloud in an AWS data center. VMware Cloud on AWS enables admins to keep the VMware products they are familiar with, even if they decide to migrate to the public cloud.

In addition, there's an increased emphasis on taking advantage of both VMs and containers. In 2018, Amazon announced Firecracker, an open source virtualization technology for micro VMs. Micro VMs bridge the gap between traditional VMs and containers, enabling admins to realize the benefits of both: better security and reduced overhead.

Amazon also introduced AWS Outposts in 2018. Outposts brings native AWS services, infrastructure and operating models to admins' data centers. It provides services such as Elastic Load Balancing, Elastic Container Service and Elastic Container Service for Kubernetes.

In 2019, VMware announced its acquisition of Bitfusion, which provides admins with high-performance computing, AI and machine learning capabilities. The company also announced Project Pacific, which is an initiative to rearchitect vSphere to deeply integrate and embed Kubernetes, as well as Tanzu, which provisions new clusters and attaches existing clusters running in multiple environments to help centralize management and operations.

IBM also continues to make strides in the virtualization market, most notably with its acquisition of Red Hat in 2019. This acquisition marks one of the biggest tech acquisitions in history, costing the company $34 billion. IBM plans to incorporate Red Hat into its hybrid cloud division, with hopes to extend its cloud capabilities and become a contender against Amazon and Microsoft.

