Three steps to avoid server NIC teaming problems

NIC teaming can help reduce resource contention and improve VM performance, but don't fall victim to common mistakes when setting it up.

The concept of network interface card teaming has been around for quite some time, but until somewhat recently, building a NIC team required specialized hardware. In Windows Server 2012, Microsoft made it possible to build a NIC team using commodity hardware, which lets you create a logical NIC with the aggregate bandwidth of all of the included hardware NICs.
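
To give a sense of how simple this has become, here is a minimal PowerShell sketch of creating a team on Windows Server 2012 or later. The adapter and team names are placeholders; substitute the names that Get-NetAdapter reports on your own server, and note that the Dynamic load balancing algorithm requires Windows Server 2012 R2 or later.

    # List the physical adapters that are available for teaming
    Get-NetAdapter

    # Create a team from two physical NICs (adapter names are examples)
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic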

Server NIC teaming can be tremendously beneficial in virtual server environments. After all, one of the biggest problems in virtualized environments is resource contention: multiple virtual servers must share limited physical hardware resources. In some cases, this sharing of resources can cause a physical network adapter to become a choke point.

One solution to this problem is to create a series of virtual switches, each linked to a separate physical network adapter. The problem with this approach is that it makes virtual server management more difficult, because administrators must make sure that network traffic is being divided evenly across the virtual switches. An easier solution to this problem is to build a NIC team and link it to a single virtual switch. That way, all of a host server's VMs can share a common virtual switch, but network traffic can be automatically load balanced across multiple physical NICs.
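
As a rough sketch of this design in Hyper-V, you could bind a single virtual switch to the teamed logical NIC. The switch name below is a placeholder, and "Team1" assumes the team created in the earlier example; setting the bandwidth mode to Weight also enables the QoS reservations discussed later in this article.

    # Bind one virtual switch to the teamed logical NIC
    New-VMSwitch -Name "TeamSwitch" -NetAdapterName "Team1" `
        -MinimumBandwidthMode Weight -AllowManagementOS $true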

On the surface, NIC teaming would seem absolutely ideal for use on host servers. If, however, you don't adhere to some basic best practices on those host servers, then server NIC teaming can introduce a number of problems into the virtualization infrastructure.

Before I describe these best practices, I want to point out that although this article uses Microsoft terminology, the best practices mostly hold true regardless of which hypervisor your organization is using. Not every best practice can be applied to non-Microsoft environments exactly as written, but the basic concepts hold true across the board.

Step one: Build in redundancy

With that said, the first best practice is to build redundancy into your NIC team. Suppose for a moment that your host server has five 10 Gigabit Ethernet ports. It might be tempting to use all five ports, so that you can get the equivalent of a 50 Gbps connection. The problem with this approach is that if any one of the NICs fails, the entire NIC team could fail as a result -- depending on which vendor's product you are using. As such, it is a good idea to designate at least one NIC within the team as a hot spare. That NIC will automatically be pressed into service in the event of a NIC failure.
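
In the Microsoft implementation, a team member is designated as a hot spare by placing it in standby mode. A minimal sketch, assuming the fifth adapter is named "NIC5" and the team from the earlier example:

    # Add the fifth NIC to the team, then mark it as a standby (hot spare) member
    Add-NetLbfoTeamMember -Team "Team1" -Name "NIC5"
    Set-NetLbfoTeamMember -Team "Team1" -Name "NIC5" -AdministrativeMode Standby

Note that Windows allows only one standby adapter per team.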

Step two: Segregate traffic types

A second best practice is to beware of dedicating all of your server's NICs to the NIC team -- even if some of the NICs are designated as hot spares. The reason for this is that a virtualization host has to be able to handle many different types of traffic. If you dedicate all of the server's NICs to a NIC team, then all traffic types will have to flow through that team and, if left unchecked, some types of traffic could choke out others. Some of the traffic types that are commonly present in virtualized environments include:

  • Client traffic (end user access);
  • Cluster communication traffic;
  • VM replication traffic;
  • Live migration traffic;
  • Storage traffic; and
  • Out-of-band management traffic.

In all fairness, many organizations use Fibre Channel for storage traffic and some servers have a dedicated port for out-of-band management traffic. Even so, you wouldn't want simultaneous live migrations to bring user traffic to a snail's pace. Conversely, you wouldn't want user traffic to become so heavy that live migrations become impossible. If you are going to route all traffic through a NIC team, then it is important to use quality of service (QoS) to manage bandwidth use.
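
If your virtual switch was created with a weight-based minimum bandwidth mode, as in the earlier sketch, Hyper-V can reserve a relative share of the team's bandwidth for each traffic type. The virtual NIC names and weight values below are purely illustrative:

    # Create management OS virtual NICs for specific traffic types
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "TeamSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "TeamSwitch"

    # Reserve relative bandwidth shares so no single traffic type starves the others
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10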

Step three: Maintain consistency within a cluster

One last issue to be aware of is that hosts in a hypervisor cluster are generally required to adhere to a similar configuration. For example, if you build a cluster of Hyper-V servers, each node in the cluster must have identical virtual switches. Likewise, if your cluster makes use of a cluster shared volume, then each node must have identical connectivity to that volume. The point is that in a clustered environment, each of the cluster nodes should be equipped with an identically configured NIC team.
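
One quick way to spot configuration drift is to query each node's team settings remotely and compare the results. A sketch, assuming hypothetical node names and PowerShell remoting enabled:

    # Compare team configuration across cluster nodes
    Invoke-Command -ComputerName "Node1","Node2" -ScriptBlock { Get-NetLbfoTeam } |
        Select-Object PSComputerName, Name, TeamingMode, LoadBalancingAlgorithm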

Server NIC teaming can be tremendously beneficial to virtualization hosts. Even so, there are certain guidelines you must follow to keep from introducing problems into the virtualization infrastructure.

Next Steps

Configure NIC teaming in Windows Server

NIC teaming can keep VDI in balance

Windows Server 2012 builds in NIC teaming
