One of the most important new features that Microsoft included in Hyper-V 3.0 is virtual Fibre Channel. Prior to the release of Windows Server 2012 and Hyper-V 3.0, physical servers that depended on Fibre Channel-based storage connectivity were typically considered poor candidates for virtualization. However, Hyper-V’s virtual Fibre Channel feature makes it possible for a virtual machine to communicate directly with a Fibre Channel storage area network.
Laying the groundwork for virtual Fibre Channel
Hyper-V hosts almost always operate as part of a cluster to provide fault tolerance for VMs. VMs that make use of the virtual Fibre Channel feature can be made fault tolerant (and can be live migrated), but only if the destination host contains an HBA that can be used to maintain Fibre Channel connectivity.
VMs often have differing storage connectivity requirements. For example, two VMs might reside on the same host server, but need to connect to two completely different SANs. The host server's physical hardware must provide this connectivity.
Even if your organization does not make use of multiple SANs, it is common for mission-critical servers to make use of multipath I/O. Doing so allows for multiple paths to a common storage target, preventing an HBA or a Fibre Channel switch from becoming a single point of failure. However, if you want your VMs to have true multipath I/O capabilities, then you must implement multipath I/O at the hardware level on the host server.
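As a sketch of the host-side preparation, the built-in MPIO feature can be enabled from PowerShell. This assumes you are using the Microsoft DSM; storage vendors often supply their own DSM with its own setup routine.

```powershell
# Add the built-in Multipath I/O feature on the Hyper-V host
Install-WindowsFeature -Name Multipath-IO

# List the storage devices that MPIO is able to claim
Get-MPIOAvailableHW

# Claim all MPIO-capable devices, including Fibre Channel LUNs
# (-r allows a reboot if needed, -i installs, -a "" claims all device IDs)
mpclaim.exe -r -i -a ""
```

After the host reboots, each multipathed LUN should appear as a single disk rather than one disk per path.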
Once you have established the underlying hardware-level connectivity, the VMs running on the host can make use of the Fibre Channel architecture. Hyper-V defines a virtual SAN as a collection of physical HBA ports. At its simplest, you could create a separate virtual SAN for each HBA; and if a host server contains an HBA with multiple sets of ports, then you could create multiple virtual SANs from that single HBA.
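The grouping of physical HBA ports into a virtual SAN can be scripted with the Hyper-V PowerShell module. A minimal sketch follows; the virtual SAN name and the World Wide Names are placeholders that you would replace with values from your own hardware.

```powershell
# Enumerate the physical Fibre Channel HBA ports on the host
Get-InitiatorPort -ConnectionType FibreChannel

# Create a virtual SAN bound to specific HBA ports, identified by
# their World Wide Names (the WWNs below are placeholders)
New-VMSan -Name "ProductionSAN" `
    -WorldWideNodeName "C003FF0000FFFF00" `
    -WorldWidePortName "C003FF5778E50002"
```

Passing multiple WWN pairs to `New-VMSan` is how several HBA ports end up grouped into one virtual SAN.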
Linking a VM to a virtual SAN
Virtual SANs are defined at the hypervisor level. To connect a virtual server to a Fibre Channel port, you must go to the VM's settings screen and use the Add Hardware option to add a Fibre Channel adapter. Upon doing so, you will be prompted to specify the virtual SAN that the virtual Fibre Channel adapter will connect to.
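The same linkage can be made from PowerShell rather than the settings screen. In this sketch the VM and virtual SAN names are placeholders; note that the VM must be powered off when the adapter is added.

```powershell
# Attach a virtual Fibre Channel adapter to a VM and connect it
# to an existing virtual SAN (names are placeholders)
Add-VMFibreChannelHba -VMName "SQLVM01" -SanName "ProductionSAN"

# Verify the adapter and the WWPNs that Hyper-V generated for the VM
Get-VMFibreChannelHba -VMName "SQLVM01"
```

The generated WWPNs are what you present to the physical SAN for zoning and LUN masking, just as you would for a physical server's HBA.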
The process of linking a virtual server to a virtual SAN, which in turn connects the virtual server to a physical Fibre Channel port, is relatively straightforward. Even so, there are some limitations. First, the virtual server must run a compatible operating system. The only guest operating systems that Microsoft supports for use with virtual Fibre Channel are Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012.
Another limitation is that, regardless of the number of virtual SANs defined on a host server, each virtual server can accommodate a maximum of four virtual Fibre Channel adapters. This holds true whether the virtual Fibre Channel adapters connect to the same virtual SAN or to multiple virtual SANs.
One last consideration is the volume of I/O requests that will be handled by each HBA. If multiple VMs share a virtual SAN, then the physical HBAs linked to the virtual SAN must be able to accommodate traffic from all of the VMs.
If you find that the volume of I/O requests is too high for the Fibre Channel hardware to handle, then you might consider implementing multipath I/O for a virtual SAN. When you do, the host server will perform dynamic load balancing of I/O requests across the physical ports included within the virtual SAN. This helps to prevent some ports from becoming saturated while others are underutilized.
As you can see, there are a number of hardware decisions to make in preparation for using virtual Fibre Channel. Even so, many of these preparations are geared toward providing fault tolerance and scalability, and they really aren't that different from the types of planning that might be required in a physical data center.
This was first published in June 2013