How does virtualized connectivity help server workload performance? What are some underlying technologies?
Virtualization is primarily a software technology: it installs a layer of abstraction between the workload and its underlying hardware. But software alone isn't enough to ensure optimum virtualization performance. Chipmakers like Intel and AMD have made great strides in providing native virtualization support in recent processor models, and that support now extends into the I/O and connectivity features of the server. Let's examine some of the key hardware technologies that can boost connectivity performance in today's virtualized servers.
Traditional server virtualization abstracts the operating systems and workload software from the underlying hardware. However, that same virtualization increases stress on the server's I/O capabilities. The storage, network and inter-server traffic from a growing number of virtual machines (VMs) combines to form bottlenecks and wastes CPU time waiting for network access, potentially limiting the practical number of VMs a server can host even when adequate CPU and memory resources are available.
I/O virtualization extends the traditional virtualization paradigm by abstracting high-level network protocols from the underlying physical network connections and offloading some network traffic processing tasks from the processor (using features on Ethernet controller chips). Virtualized connectivity relies on a single, high-bandwidth physical network adapter where the bandwidth is assigned dynamically through multiple virtual devices such as virtual network interface cards (vNICs) or virtual host bus adapters (vHBAs), which appear as common NICs or HBAs to the network or SAN.
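The core idea -- several vNICs dynamically sharing one physical adapter's bandwidth -- can be sketched in a few lines of Python. Everything here (the `VirtualAdapter` class, the weight-based shares) is illustrative only, not any vendor's real API:

```python
# Minimal sketch of I/O virtualization's core idea: several vNICs share one
# physical adapter, with bandwidth assigned dynamically by weight rather than
# fixed per physical card. All names here are hypothetical.

class VirtualAdapter:
    def __init__(self, link_gbps):
        self.link_gbps = link_gbps   # e.g., one physical 10 GigE port
        self.vnics = {}              # vNIC name -> weight (share of the link)

    def add_vnic(self, name, weight):
        self.vnics[name] = weight

    def bandwidth_of(self, name):
        # Each vNIC's effective bandwidth is its weighted share of the link.
        total = sum(self.vnics.values())
        return self.link_gbps * self.vnics[name] / total

adapter = VirtualAdapter(link_gbps=10)
adapter.add_vnic("vm1-vnic", weight=2)   # I/O-heavy VM gets a larger share
adapter.add_vnic("vm2-vnic", weight=1)
adapter.add_vnic("vm3-vnic", weight=1)
```

With these weights, vm1's vNIC sees 5 Gbps of the 10 GigE link and the other two see 2.5 Gbps each; adding or removing a vNIC rebalances the shares automatically, which is exactly what static per-card NIC assignments cannot do.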
I/O and connectivity virtualization can boost server performance and simplify I/O hardware requirements while increasing the maximum number of I/O-intensive VMs on a server and improving network resource management. For example, a busy server may require multiple NIC ports across several NIC cards to accommodate the I/O needs of all local VMs. By switching to high-bandwidth I/O virtualization, a single 10 GigE NIC and port (two for resiliency) can lower costs, reduce server power demands, reduce cabling, and require fewer corresponding switch ports. I/O virtualization is especially powerful in dense blade systems, where a large number of NIC ports can be replaced with a single virtualized I/O adapter.
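A back-of-the-envelope comparison makes the consolidation concrete. The figures below are illustrative assumptions, not vendor data: a server with eight 1 GbE ports across four dual-port NICs versus a resilient pair of 10 GigE ports:

```python
# Illustrative consolidation math: eight 1 GbE ports (four dual-port NICs)
# replaced by two 10 GigE ports (a resilient pair). Fewer ports means fewer
# cables and fewer corresponding switch ports, while aggregate bandwidth rises.

def port_count(nics, ports_per_nic):
    return nics * ports_per_nic

legacy_ports = port_count(nics=4, ports_per_nic=2)       # 8 x 1 GbE ports
virtualized_ports = port_count(nics=2, ports_per_nic=1)  # 2 x 10 GigE ports

cables_saved = legacy_ports - virtualized_ports   # also switch ports saved
bandwidth_gain = (2 * 10) / (8 * 1)               # ratio of aggregate Gbps
```

Under these assumptions the server sheds six cables and six switch ports while aggregate bandwidth grows 2.5x, before counting the power and slot savings from removing three adapter cards.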
As another example, traditional network components rely on discrete physical configuration and inefficient, error-prone provisioning. Abstracting I/O and connectivity allows greater bandwidth utilization and faster bandwidth provisioning without the errors and manual configuration steps normally associated with traditional network device setups. Software-driven network provisioning can also change dynamically, allowing convenient scalability by adding or removing resources as workloads demand.
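Software-driven provisioning typically works by declaring the desired state as data and letting software reconcile it, rather than hand-configuring each device. The sketch below is a simplified, hypothetical reconcile step, not any specific product's behavior:

```python
# Sketch of software-driven provisioning: the desired vNIC set is declared as
# data, and a reconcile step computes what to add or remove -- no manual,
# per-device configuration steps. Names and structure are illustrative.

def reconcile(current, desired):
    """Return (to_add, to_remove) so that applying both yields `desired`."""
    to_add = sorted(set(desired) - set(current))
    to_remove = sorted(set(current) - set(desired))
    return to_add, to_remove

current = {"vm1-vnic", "vm2-vnic"}
desired = {"vm1-vnic", "vm3-vnic", "vm4-vnic"}   # workload demand changed

to_add, to_remove = reconcile(current, desired)
```

Because the change is computed rather than typed in port by port, the same declaration can be applied repeatedly and scaled up or down as workloads demand, eliminating the drift and typos of manual device setup.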
Hardware vendors like Intel have developed several hardware technologies that facilitate I/O and connectivity virtualization, including virtual machine device queues (VMDq), single-root I/O virtualization (SR-IOV) and Data Direct I/O (DDIO), along with native quality-of-service features.
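As a concrete example of one of these technologies, SR-IOV lets a single physical NIC present itself as multiple PCIe virtual functions (VFs) that can be handed directly to VMs. On Linux, the VF count is set by writing to the adapter's `sriov_numvfs` sysfs file. The sketch below only builds the path and value, since actually applying it requires root privileges and an SR-IOV-capable adapter:

```python
# SR-IOV on Linux: the number of virtual functions (VFs) for a physical NIC
# is set by writing a count to /sys/class/net/<iface>/device/sriov_numvfs.
# This helper only constructs the path and value; no I/O is performed here.

def sriov_vf_config(iface, num_vfs):
    if num_vfs < 0:
        raise ValueError("VF count must be non-negative")
    path = f"/sys/class/net/{iface}/device/sriov_numvfs"
    return path, str(num_vfs)

path, value = sriov_vf_config("eth0", 4)
# Equivalent shell step, run as root on capable hardware:
#   echo 4 > /sys/class/net/eth0/device/sriov_numvfs
```

Each VF then appears to its VM as an ordinary NIC, bypassing the hypervisor's software switch for most traffic, which is where much of the CPU-offload benefit comes from.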
Related Q&A from Stephen J. Bigelow