
Using subnets to isolate network traffic in Hyper-V

In a Hyper-V environment, isolating network traffic through a series of subnets and network interface cards can help prevent failures and security breaches.

When consulting clients express interest in Hyper-V servers, I recommend implementing as many network interface cards (NICs) as their budget allows. Typically, if the goal is to centralize storage through a Fibre Channel storage area network (SAN), I suggest a minimum of four NICs per server. If clients use an iSCSI SAN, I recommend a minimum of six. But it's not unusual to suggest 10 NICs for each server.

The reason is this: Many Hyper-V server administrators require the extra network segregation that additional NICs bring. Admins also like the potential for link aggregation among both storage and production network connections. And while Hyper-V supports virtual local area network (VLAN) trunking, a method to support multiple VLANs that have members on more than one switch, the challenges of this setup can be more political than technical.
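To make trunking concrete, here is a minimal sketch of how a trunked virtual NIC might be configured. It assumes the Hyper-V PowerShell module that ships with Windows Server 2012 and later (the Hyper-V release this article discusses predates these cmdlets), and the VM name and VLAN IDs are hypothetical:

    # Pass VLANs 10, 20 and 30 (hypothetical IDs) through the VM's
    # virtual NIC as a trunk; untagged frames land on native VLAN 0.
    Set-VMNetworkAdapterVlan -VMName "EdgeVM" -Trunk `
        -AllowedVlanIdList "10,20,30" -NativeVlanId 0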

Typically, when VLANs are trunked to Hyper-V servers, the network management responsibility shifts to the virtualization administrator. It's not uncommon for network admins to be hesitant to lose this responsibility, and security managers understandably fear that a group of "nonspecialists" (i.e., virtualization administrators) is now accountable for a part of the environment in which they have little proficiency.

Furthermore, the potential for administrative errors increases when multiple VLANs are trunked together. As more server admins try to consolidate multiple subnets – and, therefore, security zones – major problems can arise.

Using subnets to isolate network traffic

Additionally, separating NICs into different subnets not only follows Microsoft's suggested guidelines but also helps prevent failures down the road. Microsoft's Failover Cluster service is particularly intolerant of heartbeat latency. (A heartbeat is a required set of communications between nodes that a cluster uses to recognize when nodes go offline.) When a server can't send or respond to cluster heartbeats in a timely manner, it can cause resource or cluster failures. Isolating cluster heartbeats on their own connection prevents this scenario.
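To illustrate, here is a minimal sketch of how an admin might dedicate a subnet to heartbeat traffic using the FailoverClusters PowerShell module (available since Windows Server 2008 R2); the subnet address is hypothetical:

    Import-Module FailoverClusters
    # Find the cluster network on the (hypothetical) heartbeat subnet
    # and restrict it to internal cluster traffic only: Role 1 means
    # cluster-only, 0 means none, 3 means cluster plus client traffic.
    $hb = Get-ClusterNetwork | Where-Object { $_.Address -eq "192.168.99.0" }
    $hb.Role = 1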

To illustrate my point, here is a real-world example of how to correctly segregate network connections. A client of mine purchased two Hyper-V servers for deploying VMs across three subnets:

  • Subnet A was a production network containing traditional office servers like Exchange, SQL, and file servers;
  • Subnet B was an operations network for their line-of-business servers. This business-critical group needed a few extra network protections; and
  • Subnet C contained a testing and staging sandbox for developers.

To properly connect each subnet, they required six NICs for the following (a configuration sketch appears after the list):

  • One NIC was dedicated to management traffic for the Hyper-V servers. Live Migration traffic also traveled on this interface.
  • One network card provided the cluster's heartbeat connection. This connection resided on its own subnet and ensured that network congestion would not result in a cluster failure.
  • Two NICs were connected to the iSCSI SAN through a multipath I/O setup.
  • Two network cards were physically connected and logically configured in the network operating system (e.g., Cisco IOS) as a bonded connection for passing VLAN traffic to subnets A, B, and C.
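For reference, here is a minimal sketch of how that six-NIC layout might look from the host side. It assumes the NetAdapter and NIC teaming cmdlets introduced with Windows Server 2012 (native teaming did not yet exist in the Hyper-V release this article covers), and every adapter and team name is hypothetical:

    # Rename physical NICs to reflect their roles (names are hypothetical)
    Rename-NetAdapter -Name "Ethernet"   -NewName "Mgmt-LiveMigration"
    Rename-NetAdapter -Name "Ethernet 2" -NewName "Cluster-Heartbeat"
    Rename-NetAdapter -Name "Ethernet 3" -NewName "iSCSI-Path1"
    Rename-NetAdapter -Name "Ethernet 4" -NewName "iSCSI-Path2"
    # Bond the remaining two NICs into one team for VM and VLAN traffic
    New-NetLbfoTeam -Name "VM-Team" -TeamMembers "Ethernet 5","Ethernet 6"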

This is an acceptable configuration because it segregates each traffic type into its own zone. iSCSI traffic, for instance, routes through a storage network using isolated paths. This setup ensures that production network congestion cannot impact access to the server hard drives.
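As a rough illustration of the multipath piece, the sketch below uses the iSCSI and MPIO cmdlets built into Windows Server 2012 and later (a 2008-era deployment would have used the iSCSI Initiator control panel instead); the portal address and target IQN are hypothetical:

    # Let the Microsoft DSM claim iSCSI disks for multipath I/O
    Enable-MSDSMAutomaticClaim -BusType iSCSI
    # Register the (hypothetical) SAN portal, then connect with
    # multipathing enabled so both NICs carry storage traffic
    New-IscsiTargetPortal -TargetPortalAddress "192.168.50.10"
    Connect-IscsiTarget -NodeAddress "iqn.2009-11.com.example:san01" `
        -IsMultipathEnabled $true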

Creating a separate connection for management traffic, on the other hand, accomplishes two things. First, it physically separates virtual machine (VM) traffic from Hyper-V management traffic, which is a good security practice. Second, it prevents a VM from overconsuming a network connection, which can inhibit server management.
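In later Hyper-V versions, this separation can be expressed directly when the VM switch is built. A minimal sketch, assuming the Windows Server 2012+ Hyper-V module and the hypothetical team name from the earlier sketch:

    # Bind the VM switch to the teamed NICs and keep the management OS
    # off of it, so host management stays on its own dedicated NIC.
    New-VMSwitch -Name "Production" -NetAdapterName "VM-Team" `
        -AllowManagementOS $false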

The final pair of NICs was designed for link aggregation. While this configuration is the source of substantial discussion on IT blogs, I explained Microsoft's position on NIC teaming support in part one of this series. (Be aware that Microsoft's use of the Microsoft Virtual Network Switch Protocol may prevent some NIC teaming drivers from functioning. Check with your server vendor before attempting Hyper-V NIC teaming.)

Potentially, my client could have trunked all three production VLANs into the same interface pairing. Despite the low traffic and relatively small Hyper-V host server counts, the system administrators segregated each VLAN's traffic further by routing it through individual interfaces, rather than trunking them.

Their reasoning: They wanted to prevent mistakes.

At times, Hyper-V's VLAN wizards are difficult to navigate. First, you must create virtual switches in Virtual Network Manager and assign the right parameters to each. Next, you must ensure that the correct physical interfaces are connected to the right virtual switches. Since Hyper-V's management wizards lack a single-pane graphical interface such as VMware's vSphere provides, mistakes are easily made.
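In scripted form, those same steps look something like the sketch below, again assuming the Windows Server 2012+ Hyper-V module; the VM name, switch name and VLAN ID are hypothetical:

    # Attach the VM's virtual NIC to the intended virtual switch...
    Connect-VMNetworkAdapter -VMName "App01" -SwitchName "Production"
    # ...tag it into its subnet's VLAN (access mode, one VLAN)...
    Set-VMNetworkAdapterVlan -VMName "App01" -Access -VlanId 20
    # ...and verify the assignment before moving on.
    Get-VMNetworkAdapterVlan -VMName "App01"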

While neither Hyper-V nor Cisco's routing protocols are known to have data leakage issues with VLANs, flipping a VM's interface among VLANs was considered too great a risk. Therefore, they leveraged all 10 NICs because "it was just easier."

In the last installment of this three-part series, I'll explain the creation process for virtual switches. You may find this a cumbersome procedure -- one that will most likely require diligent note taking.

About the expert

Greg Shields is an independent author, instructor, Microsoft MVP and IT consultant based in Denver. He is a co-founder of Concentrated Technology LLC and has nearly 15 years of experience in IT architecture and enterprise administration. Shields specializes in Microsoft administration, systems management and monitoring, and virtualization. He is the author of several books, including Windows Server 2008: What's New/What's Changed, available from Sapien Press.


 

This was last published in November 2009
