The first article of this three-part series covered Hyper-V high-availability hardware considerations. The next step is to integrate those pieces to create a Hyper-V high-availability storage arrangement.
Creating a Windows Failover Cluster requires the careful connection of servers, storage and networking. While there are several options available to accomplish these tasks, few make sense for Hyper-V high-availability environments. In this article, you'll learn about the lesser-known Hyper-V high-availability storage considerations.
For the neophyte cluster administrator, one of the most important Windows Failover Cluster considerations involves storage. As discussed in part one of this series on Hyper-V high availability, any cluster that requires failover resources needs a minimum of two hosts connected to an area of shared storage.
High-availability storage: iSCSI or Fibre Channel?
Typically, today's high-availability storage comes in one of two flavors: Fibre Channel and iSCSI storage area networks (SANs). Traditional Fibre Channel SANs provide a high level of performance but require specialized hardware for connecting servers to storage.
Next, there are iSCSI SANs, which deliver performance comparable to their Fibre Channel counterparts for the kinds of processing required by virtual machines (VMs). Additionally, server and storage connections in iSCSI SANs use traditional copper networking cables. As a result, your existing network infrastructure can support storage traffic in the same way it supports your networking traffic.
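Because iSCSI rides on ordinary Ethernet, a host connects to shared storage with the built-in software initiator. As a minimal sketch, later Windows Server releases (2012 and up) expose this through PowerShell cmdlets; the portal address and IQN below are placeholders for your own SAN:

```powershell
# Register the SAN's iSCSI portal and connect to a target over the
# existing Ethernet infrastructure. The address and IQN are placeholders.
New-IscsiTargetPortal -TargetPortalAddress 192.168.50.10

# List discovered targets, then connect persistently so the session
# survives reboots.
Get-IscsiTarget | Format-Table NodeAddress, IsConnected
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.example:target1" -IsPersistent $true
```

On Windows Server 2008 R2, the same steps are performed with the iSCSI Initiator control panel or the iscsicli.exe utility rather than these cmdlets.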
Because of this benefit, iSCSI SAN connections may seem like the obvious solution for your Hyper-V high-availability storage needs. But you must account for the following traffic considerations:
- Network segregation. Storage traffic generally occurs at a significantly greater rate than traditional network traffic. Consequently, a storage network connection tends to have much greater utilization than a regular network connection. For this reason, it's a best practice not only to segregate storage networking onto different network cards but also to place storage networking on its own network path.
- By isolating your storage, storage network oversubscription -- if it should happen -- won't affect your regular networking.
- Network security. The fact that storage connections can share existing network hardware also introduces the chance of data exposure.
- When using the iSCSI storage networking protocol, take added care to secure that network connection. You can achieve this through authentication, most commonly with the Challenge-Handshake Authentication Protocol (CHAP). Encrypting the data between storage and server is another way to secure network connections. This process, however, tends to have a significant effect on storage performance and is not common.
- Do not use NIC teaming for storage connections. While it's possible to use traditional network interface card (NIC) teaming drivers to aggregate network cards, it's not advised for storage connections.
- Aggregating network connections for iSCSI is more appropriately accomplished using either the Multiple Connections per Session (MCS) or multipath I/O (MPIO) protocols. These protocols offer similar overall performance levels (with MCS providing marginally better performance), but some storage devices do not support MCS. Further, MPIO supports a different traffic load-balancing policy per logical unit number, which matters when different LUNs need different policies.
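The considerations above can be sketched in PowerShell on a recent Windows Server release, assuming the built-in iSCSI and MPIO modules; every address, user name and secret shown is a placeholder:

```powershell
# Bind the iSCSI session to a NIC on the isolated storage subnet
# (segregation) and require one-way CHAP authentication (security).
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.example:target1" `
    -InitiatorPortalAddress 10.10.10.21 `
    -AuthenticationType ONEWAYCHAP `
    -ChapUsername "hvhost1" -ChapSecret "ChapSecret12345" `
    -IsPersistent $true

# Aggregate storage paths with MPIO rather than NIC teaming.
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI           # claim iSCSI disks for MPIO
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR  # round-robin default policy
```

The load-balancing policy set here is only a server-wide default; per-LUN policies can still be assigned individually where a particular LUN warrants different behavior.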
In reality, a SAN decision boils down to a company's hardware infrastructure. If you have invested in a Fibre Channel infrastructure, exploiting that existing equipment for VM purposes provides the best return on investment.
Cluster Shared Volumes for high-availability storage
Once you've chosen your high-availability storage type, the next step is to connect the storage to the hosts. It's important to recognize that Hyper-V has the new Cluster Shared Volumes (CSV) feature, which debuted with Windows Server 2008 R2.
Prior to CSV, VMs that were colocated on the same logical unit number (LUN) were forced to fail over as a group. Now with CSV, it's feasible to create a small number of LUNs, each containing a large number of VMs. Those VMs, in a CSV-enabled environment, have the ability to fail over individually.
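Enabling CSV on a clustered disk is a short operation. As a minimal sketch using the FailoverClusters PowerShell module (the disk name below is a placeholder for whatever your cluster assigned):

```powershell
# Promote an available cluster disk to a Cluster Shared Volume so the
# VMs it holds can fail over individually. "Cluster Disk 1" is a
# placeholder for the disk resource name in your cluster.
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# CSV volumes then appear under C:\ClusterStorage\ on every node.
Get-ClusterSharedVolume | Format-Table Name, State
```

On Windows Server 2008 R2, the equivalent action is available in Failover Cluster Manager after enabling Cluster Shared Volumes for the cluster.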
Cluster Shared Volumes is a great addition to Hyper-V, but an often-overlooked consideration about this feature is: Do your add-on technologies work with it?
Remember: CSV is a relatively new technology that changes the way servers interact with storage. As a result, other technologies -- such as backup and restore tools -- may not be CSV-aware.
In the end, when architecting Hyper-V high-availability storage, pay special attention to CSV. Before turning it on, make sure that all of your additional services -- particularly backup and restore tools -- fully support the technology.