Benefiting from I/O virtualization

A new form of virtualization, I/O virtualization, can provide greater flexibility and utilization as well as faster provisioning. Expert Scott Lowe explains the potential gains to be made by deploying I/O virtualization on a network.

I/O virtualization is a new form of virtualization that is justifiably gaining attention in the data center. It's one of several virtualization offshoots -- such as service virtualization or facilities virtualization -- spawned by the meteoric rise of server virtualization. But what is I/O virtualization, exactly? And what benefits can it bring to the data center?

First of all, I/O virtualization is a legitimate form of virtualization, not just another vendor attempt to capitalize on the hot virtualization market. That said, beware of vendors claiming that their products "also provide virtualization" -- in many cases, the "new" virtualization is just marketing speak.

I/O virtualization is a very new market with some very new players, which makes it difficult to discuss real-world scenarios in which I/O virtualization has been used -- there just aren't that many yet. Two companies offering I/O virtualization products, 3Leaf Systems and Xsigo Systems, are both very new companies, not long out of stealth mode. Two better-known companies, Cisco Systems and Brocade, are also working in this space, but their products will not be able to fully provide I/O virtualization services until some additional standards are defined.

Let's try to tackle what I/O virtualization is. In this context, I/O virtualization is the abstraction of upper layer protocols from physical connections or physical transport. This is accomplished in a couple of different ways. Some vendors, such as Xsigo and 3Leaf, use the relatively well-established InfiniBand interconnect technology as the physical transport layer, allowing them to leverage InfiniBand's extremely low latency and high bandwidth to carry TCP/IP, Fibre Channel (FC) and other traditional protocols. Other vendors, such as Cisco and Brocade, are banking on future standards such as 10Gb Ethernet or extensions to 10Gb Ethernet like Data Center Ethernet (DCE).
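To make the idea of abstraction more concrete, here is a minimal, hypothetical sketch in Python. It is not any vendor's actual API; the class and attribute names are invented for illustration. The point it shows is that Ethernet and Fibre Channel identities (MAC addresses, WWNs) are defined in software and multiplexed over a small number of physical transport links, rather than being tied to dedicated NICs and HBAs in each server.

from dataclasses import dataclass, field
from typing import List


@dataclass
class PhysicalLink:
    """One physical transport connection, e.g. an InfiniBand HCA port."""
    name: str
    bandwidth_gbps: float


@dataclass
class VirtualNIC:
    """Ethernet identity presented to the server's operating system."""
    mac_address: str


@dataclass
class VirtualHBA:
    """Fibre Channel identity presented to the server's operating system."""
    wwpn: str


@dataclass
class ServerIOProfile:
    """All of a server's I/O, carried over a small number of physical links."""
    links: List[PhysicalLink]
    vnics: List[VirtualNIC] = field(default_factory=list)
    vhbas: List[VirtualHBA] = field(default_factory=list)


# Two InfiniBand connections carry what previously required 6-8 NICs and 2 HBAs.
esx_host = ServerIOProfile(
    links=[PhysicalLink("ib0", 20.0), PhysicalLink("ib1", 20.0)],
    vnics=[VirtualNIC(f"00:50:56:00:00:{i:02x}") for i in range(6)],
    vhbas=[VirtualHBA("50:01:43:80:01:23:45:67"),
           VirtualHBA("50:01:43:80:01:23:45:68")],
)
print(f"{len(esx_host.vnics)} vNICs and {len(esx_host.vhbas)} vHBAs "
      f"over {len(esx_host.links)} physical links")

Because the identities live in the profile rather than in the hardware, they can be re-mapped to different physical links -- or to a different server entirely -- without recabling, which is what enables the faster provisioning discussed below.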

DCE is an in-progress IEEE standard that aims to provide greater efficiency, lower latency, lossless and error-free behavior and consistent behavior to Ethernet networks. These characteristics are common in today's FC-based storage area networks (SANs); bringing such capabilities to Ethernet networks will enable new services and new applications that would not have been previously possible. Fibre Channel over Ethernet (FCoE) is a prime example of this.

As a result of this abstraction, I/O virtualization vendors tout the benefits of greater flexibility, greater utilization and faster provisioning. To understand some of these benefits, consider the following deployment example: a server virtualization environment using larger, rack-mount servers with 6-8 network interface controllers (NICs) and two FC connections per server. A minimum of six NICs is typically recommended for VMware environments: for example, two for the Service Console, two for VMotion and at least two for virtual machine (VM) traffic. Deploying an InfiniBand-based I/O virtualization solution would have the following two effects in this environment:

  1. The number of connections required per server would drop from 8-10 (6-8 NICs and 2 FC) to two (two InfiniBand host channel adapters). This helps reduce cabling costs and complexity.
  2. The infrastructure port count -- that is, the number of Gigabit Ethernet and/or FC switch ports -- would be reduced. The exact amount of reduction depends on how much oversubscription is architected into the system. If we assume that the second Service Console connection, the second VMotion connection and the second FC connection exist for redundancy only (a reasonably safe assumption), then we could, at the very least, reduce the infrastructure port count by two Gigabit Ethernet ports and one FC port per server. In a server farm of eight servers, that's 16 Gigabit Ethernet ports and eight Fibre Channel ports saved (see the sketch after this list). A more thorough analysis of bandwidth utilization might show room for even greater reduction: many 4Gb FC links are not heavily utilized, and many Gigabit Ethernet interfaces are lightly utilized. In those cases, I/O virtualization could cut the infrastructure port count significantly further.
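
Here is a back-of-the-envelope sketch of the port-count math above. The per-server figures (two redundant Gigabit Ethernet ports and one redundant FC port reclaimed) follow the article's conservative assumption; adjust them to reflect your own oversubscription analysis.

# Port-count savings under the conservative redundancy-only assumption.
servers = 8
gbe_ports_saved_per_server = 2   # second Service Console + second VMotion uplink
fc_ports_saved_per_server = 1    # second Fibre Channel connection

print(f"GbE switch ports saved: {servers * gbe_ports_saved_per_server}")   # 16
print(f"FC switch ports saved:  {servers * fc_ports_saved_per_server}")    # 8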

These are just two potential effects of deploying I/O virtualization. Other potential benefits, as well as a couple of deployment scenarios, will be the subject of a future article.

About the author: Scott Lowe is a senior engineer for ePlus Technology, Inc. He has a broad range of experience, specializing in enterprise technologies such as storage area networks, server virtualization, directory services, and interoperability.
 

This was first published in April 2008
