Comparing I/O virtualization products

Finding the I/O virtualization approach that is appropriate for your environment depends on your specific business needs. In part three of this series on virtualizing I/O, we compare the technologies of various I/O virtualization providers, from VirtualConnect's abstraction of MAC addresses to 3Leaf's and Xsigo's virtualization of the physical transport to NextIO's approach, which virtualizes PCI Express. This tip outlines the various methods and their respective pros and cons.

VirtualConnect
Along with its c7000 blade chassis, HP introduced VirtualConnect, a form of I/O virtualization that masks the MAC addresses and Fibre Channel WWNNs of blades inside the chassis. This gives administrators the ability to move equipment around inside the chassis without necessarily having to reconfigure network or SAN connections as a result. Administrators can define server profiles that control the mapping between network interface cards (NICs) and external networks, and VirtualConnect can potentially allow any defined network to travel across any uplink.
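
To make the server-profile idea more concrete, here is a minimal sketch in Python. The class and field names (ServerProfile, virtual_macs, network_map and so on) are hypothetical illustrations of the concept, not HP's actual interface; the point is that the MAC addresses and WWNNs belong to the profile rather than to the physical blade, so the profile can be re-applied to a different bay without upstream reconfiguration.

    # A minimal sketch of the server-profile idea behind VirtualConnect.
    # All class and field names are hypothetical illustrations, not HP's API.

    from dataclasses import dataclass

    @dataclass
    class ServerProfile:
        """Carries the identity a blade presents to the network and SAN."""
        name: str
        virtual_macs: list[str]      # NIC MAC addresses owned by the profile
        virtual_wwnns: list[str]     # Fibre Channel WWNNs owned by the profile
        network_map: dict[str, str]  # NIC name -> defined external network

        def apply_to_bay(self, bay: int) -> str:
            # Because the identities live in the profile, moving it to a new
            # bay changes nothing upstream: switches and SAN zoning still see
            # the same MACs and WWNNs.
            return f"Profile '{self.name}' applied to bay {bay}"

    profile = ServerProfile(
        name="web-server-01",
        virtual_macs=["00-17-A4-77-00-01"],
        virtual_wwnns=["50:06:0B:00:00:C2:62:00"],
        network_map={"nic1": "prod-vlan-100", "nic2": "backup-vlan-200"},
    )
    print(profile.apply_to_bay(3))  # can later be re-applied to another bay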

3Leaf, Xsigo
Rather than abstracting just MAC addresses and Fibre Channel WWNNs while continuing to use the same physical transports, some vendors choose to virtualize the underlying physical transport as well. Two examples are 3Leaf and Xsigo, both of which use InfiniBand as the underlying physical transport for Ethernet and Fibre Channel traffic. Although both vendors rely on InfiniBand, their products are quite different.

I/O virtualization series
Check out the previous articles in Scott Lowe's series on I/O virtualization here on SearchServerVirtualization.com:

Benefiting from I/O virtualization

Maximizing I/O virtualization

3Leaf's solution combines InfiniBand host channel adapters (HCAs) and InfiniBand switches to create a redundant fabric upon which Ethernet and Fibre Channel traffic travels. This InfiniBand fabric is joined to a software solution running on industry-standard servers (only HP DL380 G5 servers right now, with other platforms to follow soon) that provide the uplinks to external Ethernet networks and Fibre Channel-based SANs.

Aside from virtualizing the Ethernet and Fibre Channel fabric, 3Leaf's solution also provides the ability to create a dynamic computing environment that allows users to repurpose and reprovision servers on the fly. This makes the 3Leaf solution very different from other "pure" I/O virtualization products.

Xsigo Systems uses InfiniBand as well but provides a very different architecture. With 3Leaf, the InfiniBand switches are separate from the hardware on which the 3Leaf software runs and where the uplinks are found to the external networks. Xsigo combines the InfiniBand switches and the uplinks into a single chassis. Servers connect to the chassis via InfiniBand HCAs, and the chassis connects to Fibre Channel-based fabrics and Ethernet networks via a series of I/O modules.

Both of these products virtualize Fibre Channel and Ethernet traffic onto an InfiniBand-based fabric, and both provide mobility of the connections between servers and external resources. As you've seen, though, the means by which they achieve that goal are quite different.
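
The Python sketch below illustrates the general idea of transport virtualization that both vendors share: Ethernet and Fibre Channel frames are carried unchanged as payloads across the InfiniBand fabric and unwrapped at the uplink. The structures and names here are illustrative assumptions only and do not represent either vendor's implementation.

    # A conceptual sketch of transport virtualization: Ethernet and Fibre
    # Channel frames ride as payloads over an InfiniBand fabric and are
    # unwrapped at the uplink. Illustrative only, not a vendor design.

    from dataclasses import dataclass

    @dataclass
    class Frame:
        protocol: str   # "ethernet" or "fibre_channel"
        payload: bytes

    @dataclass
    class InfiniBandPacket:
        destination_lid: int   # IB local identifier of the uplink module
        inner: Frame           # the original frame, carried unchanged

    def encapsulate(frame: Frame, uplink_lid: int) -> InfiniBandPacket:
        """Wrap an Ethernet or FC frame for transport across the IB fabric."""
        return InfiniBandPacket(destination_lid=uplink_lid, inner=frame)

    def decapsulate(packet: InfiniBandPacket) -> Frame:
        """At the uplink, hand the original frame to the external network or SAN."""
        return packet.inner

    eth = Frame(protocol="ethernet", payload=b"...application data...")
    print(decapsulate(encapsulate(eth, uplink_lid=42)).protocol)  # -> ethernet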

NextIO
NextIO takes an entirely different approach. Rather than virtualizing the physical transport (for example, by running Ethernet or Fibre Channel over InfiniBand), NextIO virtualizes PCI Express (PCIe) at the hardware level. This allows NextIO to virtualize practically any PCIe-based resource, whereas most other vendors are limited to Ethernet and Fibre Channel. NextIO's product is closely tied to the PCI SIG's work on Single-Root I/O Virtualization (SR-IOV) and Multi-Root I/O Virtualization (MR-IOV), which provide the underlying framework for PCIe-based devices to be virtualized and shared among multiple servers.
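
A rough Python sketch of the SR-IOV/MR-IOV concept follows: one physical PCIe function exposes multiple lightweight virtual functions that can be handed to different servers or virtual machines. The class and method names are hypothetical and are not drawn from NextIO's product or the PCI SIG specifications.

    # A rough sketch of the SR-IOV/MR-IOV idea: one physical PCIe function
    # (PF) exposes multiple virtual functions (VFs) that can be assigned to
    # different hosts or VMs. Conceptual illustration only.

    class PhysicalFunction:
        def __init__(self, device: str, max_vfs: int):
            self.device = device
            self.max_vfs = max_vfs
            self.assignments: dict[int, str] = {}  # VF index -> owning server

        def assign_vf(self, owner: str) -> int:
            """Hand out the next free virtual function to a server."""
            for vf in range(self.max_vfs):
                if vf not in self.assignments:
                    self.assignments[vf] = owner
                    return vf
            raise RuntimeError("no free virtual functions on " + self.device)

    nic = PhysicalFunction(device="10GbE adapter", max_vfs=8)
    print(nic.assign_vf("server-a"))  # VF 0 goes to server-a
    print(nic.assign_vf("server-b"))  # VF 1 goes to server-b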

Cisco, Brocade
Finally, there are a couple of vendors who are basing their I/O virtualization solutions on the trend toward a converged physical transport. Both Cisco and Brocade see the use of 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and converged network adapters (CNAs) as a means to provide I/O virtualization for Ethernet and Fibre Channel. Their products are less about I/O virtualization and more about taking advantage of standards-body advances toward a converged fabric and the new protocols that run over it.
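
As a simplified illustration of the converged-fabric idea, the Python sketch below wraps a Fibre Channel frame in an Ethernet header carrying the FCoE EtherType (0x8906), so that LAN and SAN traffic share one 10 GbE link and one CNA. The function name and field layout are illustrative assumptions, not the full FC-BB-5 encoding.

    # A simplified sketch of FCoE: a Fibre Channel frame is carried as the
    # payload of an Ethernet frame on a 10 GbE link, so one wire and one
    # converged network adapter serve both LAN and SAN traffic.
    # Illustrative only, not the complete FCoE frame format.

    import struct

    FCOE_ETHERTYPE = 0x8906  # EtherType used for FCoE traffic

    def build_fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
        """Wrap a raw Fibre Channel frame in a minimal Ethernet header."""
        assert len(dst_mac) == 6 and len(src_mac) == 6
        ether_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
        return ether_header + fc_frame

    frame = build_fcoe_frame(
        dst_mac=bytes.fromhex("0efc0001 0001"),
        src_mac=bytes.fromhex("001b21aa bbcc"),
        fc_frame=b"...encapsulated FC frame...",
    )
    print(len(frame), "bytes on the converged 10 GbE wire")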

As you can see, there are a variety of I/O virtualization products that take a number of different approaches, each with its own set of advantages and disadvantages. Organizations should take the time to carefully document their business requirements and map them to the I/O virtualization solution that best fits.

About the author: Scott Lowe is a senior engineer for ePlus Technology, Inc. He has a broad range of experience, specializing in enterprise technologies such as storage area networks, server virtualization, directory services, and interoperability.
 

This was first published in September 2008
