
Maximizing I/O virtualization

This tip explains how to use I/O virtualization to consolidate virtual server workloads and reduce port and NIC overcrowding.


Input/output (I/O) virtualization gives IT pros ways to maximize the benefits of server virtualization, streamlining provisioning and reducing the number of network interface cards (NICs) and ports used. In this tip, I describe how to work with virtual NICs and other processes, following up on my tip on basic I/O virtualization concepts.

Utilizing virtual I/O
An effective I/O virtualization strategy requires thinking differently about provisioning. In many ways, the shift is similar to the change in philosophy organizations went through when they first embraced server virtualization: in both cases, the goal is to use resources more efficiently by pooling capacity that would otherwise sit idle.

Organizations and users just getting started with server virtualization have to become comfortable with the idea of consolidating multiple workloads on a single physical server, allowing those workloads to share physical resources. Similarly, organizations and users investigating I/O virtualization must break away from old ways of thinking about provisioning I/O resources.

Before proceeding any further, it's necessary to define some terminology.

We'll use the term virtual network interface card (vNIC) to denote a virtual NIC presented to the virtualization host. Each of these vNICs will map to a physical network port or to a group of physical network ports. Multiple vNICs may map to the same physical network port(s). Likewise, a virtual host bus adapter (vHBA) is a virtual HBA presented back to the virtualization host. These vHBAs are mapped to physical Fibre Channel ports. As with vNICs, multiple vHBAs may map to the same physical port(s).
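To make the mapping concrete, here's a minimal sketch in Python (the host and port names are hypothetical; a real I/O Director is configured through its own management interface, not code like this):

    # A toy model of vNIC-to-physical-port mapping. Names are illustrative.
    vnic_map = {
        # vNIC name     -> physical port(s) on the I/O Director
        "esx01-vnic1": ["gige-port-1"],
        "esx02-vnic1": ["gige-port-1"],                 # shares a port with esx01-vnic1
        "esx03-vnic1": ["gige-port-1", "gige-port-2"],  # mapped to a group of ports
    }

    # Several vHBAs can likewise map to the same Fibre Channel port.
    vhba_map = {
        "esx01-vhba1": "fc-port-1",
        "esx02-vhba1": "fc-port-1",
    }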

In many data centers that use server virtualization, servers are provisioned with six, eight or more NICs. Why so many? In a typical VMware Infrastructure 3 (VI3) deployment, NICs might be configured like this (a quick sketch follows the list):

  • 2 NICs for the Service Console (to provide redundancy)
  • 2 NICs for the VMotion network (again, to provide redundancy)
  • 2 NICs for the virtual machines themselves
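Modeled in Python for illustration (vmnic0 through vmnic5 follow ESX's physical-NIC naming convention, but the specific assignments here are assumptions):

    # Typical VI3 NIC layout on one host: six physical NICs, paired for redundancy.
    nic_layout = {
        "Service Console":  ["vmnic0", "vmnic1"],
        "VMotion":          ["vmnic2", "vmnic3"],
        "Virtual machines": ["vmnic4", "vmnic5"],
    }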

This is a fairly common configuration, but are all these NICs necessary? Not really. They're present for redundancy, even though they may never be used in the normal course of operation. The extra Service Console and VMotion NICs, in particular, are likely to sit idle for the entire life of the server in the data center.

Users familiar with server virtualization but unfamiliar with I/O virtualization will start creating vNICs and unnecessarily binding them to physical network ports, much as they would create and configure network ports on a traditional virtualization host. But how much traffic does an ESX Service Console really need? How many Service Consoles can run on a pair of Gigabit Ethernet links?

If a typical ESX Service Console generates 50Mbps of traffic, an organization could easily combine ten Service Console connections on a single Gigabit Ethernet link with plenty of bandwidth to spare. Just as server virtualization is about combining under-utilized workloads across multiple physical servers, I/O virtualization is about combining under-utilized I/O connections across multiple servers.
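A quick back-of-the-envelope check of that claim (a Python sketch using the 50Mbps figure assumed above):

    # How many 50Mbps Service Console connections fit on one Gigabit Ethernet link?
    link_capacity_mbps = 1000
    console_traffic_mbps = 50
    consoles = 10

    used = consoles * console_traffic_mbps                    # 500 Mbps
    print(f"Utilization: {used / link_capacity_mbps:.0%}")    # 50% -- half the link to spare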

Implementing I/O virtualization
Let's take a look at a concrete example. Although this example focuses on the use of an Xsigo VP780 I/O Director in conjunction with VI3, the concepts should be similar for other I/O virtualization and server virtualization solutions.

Consider a data center with 10 ESX hosts connected to a VP780 I/O Director. Without an I/O Director, 60 Gigabit Ethernet ports would need to be configured (six ports per ESX host). But how many would we need with an I/O Director?

Let's assume that the Service Console connection for each ESX host generates an average of 60Mbps of traffic. This means that we could combine all 10 Service Console connections, an aggregate of 600Mbps of traffic, onto a single Gigabit Ethernet port. Add a second Gigabit Ethernet port for redundancy, spread the traffic across the two and each physical port will carry approximately 300Mbps of traffic.

Each ESX server would have two vNICs defined, vNIC1 and vNIC2. Each vNIC would be mapped to one of the two Gigabit Ethernet ports on the VP780 chassis, and each vNIC would serve as one of two uplinks to vSwitch0, the vSwitch hosting the ESX Service Console connection. Half of the ESX servers would use vNIC1 as the primary uplink for their vSwitch; the other half would use vNIC2. This configuration meets the performance requirements, provides redundancy and still reduces overall port count by 90% -- down from 20 connections to only two connections.
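Here's a rough sketch of that alternating active/standby assignment (hypothetical host names; in practice this is set in each host's vSwitch NIC-teaming policy, not in code like this):

    # Alternate the primary uplink across hosts so each physical port
    # carries roughly half the aggregate Service Console traffic.
    hosts = [f"esx{n:02d}" for n in range(1, 11)]
    assignments = {}
    for i, host in enumerate(hosts):
        primary, standby = ("vNIC1", "vNIC2") if i % 2 == 0 else ("vNIC2", "vNIC1")
        assignments[host] = {"active": primary, "standby": standby}

    # 10 hosts x 60Mbps = 600Mbps total, split across two physical ports
    per_port_mbps = 10 * 60 / 2
    print(assignments["esx01"], f"~{per_port_mbps:.0f}Mbps per port")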

The network connections for VMotion are handled in much the same fashion, but this time another factor must be considered. Whereas Service Console traffic flows primarily from the Service Console connections to other servers on the network, VMotion traffic stays almost exclusively within the ESX server farm. That lets us take advantage of a feature called "inter-vNIC switching": traffic between two vNICs on the same I/O card and the same virtual local area network (VLAN) is switched internally within the VP780 chassis and never goes out on the network, so even fewer Gigabit Ethernet connections are needed. If 75% of the VMotion traffic is between hosts connected to the I/O Director, we've immediately reduced the needed network connections from 20 (10 servers at two connections each) down to only five -- and that's not even taking into account how often VMotion actually occurs.
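The arithmetic behind that reduction, sketched in Python (the 75% internal-traffic share is the assumption stated above):

    # If 75% of VMotion traffic stays inside the chassis (inter-vNIC switching),
    # only the remaining 25% ever needs an external Gigabit Ethernet port.
    total_connections = 10 * 2          # 10 hosts, 2 VMotion connections each
    internal_fraction = 0.75

    external_needed = total_connections * (1 - internal_fraction)
    print(int(external_needed))         # 5 external ports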

This would be implemented as two vNICs, vNIC3 and vNIC4, for each ESX server. These vNICs would be bound across five physical Gigabit Ethernet ports on the VP780 and configured as uplinks to a vSwitch, vSwitch1, which hosts the VMkernel port for VMotion. Because the vNICs are terminated on the same I/O card and are on the same VLAN, inter-vNIC switching automatically keeps the majority of the traffic off the Gigabit Ethernet uplinks.

Virtual machine uplinks
So far, we've reduced our port count from 40 ports -- 20 for the Service Console, 20 for VMotion -- down to only seven, a reduction of about 82%.

For the virtual machine uplinks, even if we assume 50% utilization -- a very generous figure -- we can reduce the port count from 20 connections to 10 while still providing redundancy. This would be implemented as a pair of vNICs for each ESX server, with the vNICs of two different servers bound to the same Gigabit Ethernet ports.
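The same consolidation arithmetic applies here (a sketch using the 50% utilization assumption):

    # At 50% utilization, two servers' VM uplinks can share one pair of ports.
    hosts = 10
    uplinks_per_host = 2
    utilization = 0.5

    ports_without_iov = hosts * uplinks_per_host               # 20
    ports_with_iov = int(ports_without_iov * utilization)      # 10, redundancy intact
    print(ports_without_iov, "->", ports_with_iov)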

All in all, by understanding the I/O requirements across our ESX server farm and consolidating I/O workloads, we've been able to reduce the total port count from 60 Gigabit Ethernet ports down to only 17 ports, a total reduction of about 72%. That's pretty impressive.
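A final tally of the numbers above, as a quick check:

    before = 10 * 6                      # 6 Gigabit Ethernet ports per host
    after = 2 + 5 + 10                   # Service Console + VMotion + VM uplinks
    print(f"{before} -> {after} ports, a {1 - after / before:.0%} reduction")  # ~72%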

As you can see, the key to using I/O virtualization effectively is knowing and understanding the I/O requirements of your servers while carefully considering how those I/O workloads can be consolidated. By shifting how they think about provisioning I/O resources, organizations can see the same kinds of efficiencies and cost reductions with I/O that they have seen with server virtualization.

About the author: Scott Lowe is a senior engineer for ePlus Technology, Inc. He has a broad range of experience, specializing in enterprise technologies such as storage area networks, server virtualization, directory services, and interoperability.

This was first published in July 2008
