
Hyper-V high-availability storage considerations

The first article of this three-part series covered Hyper-V high-availability hardware considerations. The next step is to integrate those pieces to create a Hyper-V high-availability storage arrangement.

Creating a Windows Failover Cluster requires the careful connection of servers, storage and networking. While there are several available options to accomplish these tasks, few make sense for Hyper-V high-availability environments. In this article, you'll learn about the lesser-known Hyper-V high-availability storage considerations.

More on Hyper-V high availability
  • Fixing virtual machine cluster problems in Hyper-V
  • Hyper-V clustering and VM configuration problems
  • Cluster performance problems in Hyper-V and how to fix them
  • Killing Hyper-V high-availability cluster services and network issues

For the neophyte cluster administrator, one of the most important Windows Failover Cluster considerations involves storage. As discussed in part one of this series on Hyper-V high availability, any cluster that requires failover resources needs a minimum of two hosts connected to shared storage.

High-availability storage: iSCSI or Fibre Channel?

Typically, today's high-availability storage comes in one of two flavors: Fibre Channel and iSCSI storage area networks (SANs). Traditional Fibre Channel SANs provide a high level of performance but require specialized hardware for connecting servers to storage.

Next, there are iSCSI SANs, which achieve similar levels of performance compared with their Fibre Channel counterparts for the kinds of processing required by virtual machines (VMs). Additionally, server and storage connections in iSCSI SANs use traditional copper networking cables. As a result, your existing network infrastructure can support storage traffic in the same way it supports your networking traffic.

Because of this benefit, iSCSI SAN connections may seem like the obvious solution for your Hyper-V high-availability storage needs. But you must account for the following traffic considerations:

  • Network segregation. Storage traffic generally flows at a significantly greater rate than traditional network traffic, so a storage connection tends to see much higher utilization than a regular network connection. For this reason, it's a best practice not only to segregate storage networking onto different network cards but also to place it on its own network path. By isolating your storage traffic, storage network oversubscription -- if it should happen -- won't affect your regular networking.
  • Network security. The fact that storage connections can share existing network hardware also introduces the chance of data exposure. When using the iSCSI storage networking protocol, take extra care to secure that network connection. You can do so through authentication, most commonly with the Challenge-Handshake Authentication Protocol (CHAP). Encrypting the data in flight between storage and server is another way to secure the connection, but this tends to have a significant effect on storage performance and is not common.
  • No NIC teaming for storage connections. While it's possible to use traditional network interface card (NIC) teaming drivers to aggregate network cards, doing so is not advised for storage connections. Aggregating network connections for iSCSI is more appropriately accomplished with either the Multiple Connections per Session (MCS) or multipath I/O (MPIO) protocols. The two offer similar overall performance (MCS is marginally faster), but some storage devices do not support MCS. Further, MPIO supports different traffic load-balancing policies on a per-logical-unit-number basis, which is helpful when different LUNs require different policies.
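To see why CHAP authentication doesn't expose the shared secret on the wire, here is a minimal Python sketch of the RFC 1994 challenge-response computation. This illustrates the mechanism only; it is not production code and is independent of any particular iSCSI initiator or target implementation.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP (RFC 1994): response = MD5(identifier || secret || challenge).

    The shared secret never crosses the wire; only the challenge and this
    digest do, and a fresh random challenge defeats simple replay.
    """
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The target issues a random challenge; the initiator answers with the
# digest, which the target recomputes from its own copy of the secret.
challenge = os.urandom(16)
answer = chap_response(7, b"initiator-secret", challenge)
assert answer == chap_response(7, b"initiator-secret", challenge)
```

A target that holds the same secret simply recomputes the digest and compares; anyone sniffing the storage network sees only the challenge and the hash.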

In reality, a SAN decision boils down to a company's hardware infrastructure. If you have invested in a Fibre Channel infrastructure, exploiting that existing equipment for VM purposes provides the best return on investment.

Hyper-V high availability
In this three-part series on Hyper-V high availability, I'll explain how to successfully deploy a highly available environment. Its architecture will support your needs for automated virtual machine failover and the successful storage and processing of VMs, and it can even expand to become a multi-site, disaster-resistant, fully automated infrastructure -- all for not a lot of money.

Cluster Shared Volumes for high-availability storage

Once you've chosen your high-availability storage type, the next step is to connect the storage to the hosts. It's important to recognize that Hyper-V has the new Cluster Shared Volumes (CSV) feature, which debuted with Windows Server 2008 R2.

Prior to CSV, VMs that were colocated on the same logical unit number, or LUN, were forced to fail over as a group. With CSV, you can create a small number of LUNs, each containing a large number of VMs, and those VMs can fail over individually.
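The behavioral difference can be sketched with a toy model. The class and method names below are illustrative only -- this models the unit-of-failover concept, not Hyper-V's actual API.

```python
class Cluster:
    """Toy model of VM placement on LUNs (names are hypothetical)."""

    def __init__(self, csv_enabled: bool):
        self.csv_enabled = csv_enabled
        self.placement = {}  # vm name -> (lun, host)

    def place(self, vm: str, lun: str, host: str) -> None:
        self.placement[vm] = (lun, host)

    def fail_over(self, vm: str, new_host: str) -> list:
        """Return the list of VMs that move when `vm` fails over."""
        lun, _ = self.placement[vm]
        if self.csv_enabled:
            moved = [vm]  # CSV: each VM is its own unit of failover
        else:
            # Pre-CSV: the LUN is the unit of failover, so every VM
            # colocated on it is dragged along.
            moved = [v for v, (l, _) in self.placement.items() if l == lun]
        for v in moved:
            self.placement[v] = (lun, new_host)
        return moved

legacy = Cluster(csv_enabled=False)
modern = Cluster(csv_enabled=True)
for c in (legacy, modern):
    c.place("vm1", "lun1", "hostA")
    c.place("vm2", "lun1", "hostA")

# Without CSV both colocated VMs move together; with CSV only the
# failing VM moves.
```

Running the model, `legacy.fail_over("vm1", "hostB")` drags vm2 along, while `modern.fail_over("vm1", "hostB")` moves vm1 alone -- which is exactly why CSV lets you consolidate many VMs onto a few LUNs.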

Cluster Shared Volumes is a great addition to Hyper-V, but an often-overlooked consideration about this feature is: Do your add-on technologies work with it?

Remember: CSV is a relatively new technology that changes the way servers interact with storage. As a result, other technologies -- such as backup and restore tools -- may not be CSV-aware.

In the end, when architecting Hyper-V high-availability storage, pay special attention to CSV. Before turning it on, make sure that all of your additional services -- particularly backup and restore tools -- fully support the technology.

Greg Shields is an independent author, instructor, Microsoft MVP and IT consultant based in Denver. He is a co-founder of Concentrated Technology LLC and has nearly 15 years of experience in IT architecture and enterprise administration. Shields specializes in Microsoft administration, systems management and monitoring, and virtualization. He is the author of several books, including Windows Server 2008: What's New/What's Changed, available from Sapien Press.


This was first published in April 2010
