In this tip, you'll learn about the networking issues that virtualization creates, such as higher NIC density, increased network traffic, and communication problems between physical and virtual switches.
Server virtualization is blurring and shifting the roles of the data center server team and the networking team, because server hardware virtualization products bring the network into the server. Neither the resulting data center design nor the IT staff transition is turnkey or painless.
In this article, I'd like to take a closer look at the impact of server hardware virtualization on networking in the data center.
Server hardware virtualization, such as that provided by VMware with VMware Infrastructure 3 (VI3), by Citrix with XenServer, or by Microsoft with Microsoft Virtual Server (and eventually Hyper-V), can make a tremendous difference in data center design.
Networking is certainly one area where the effects of virtualization are being felt.
Higher NIC density
In a heavily virtualized environment, each physical server will typically have a much higher NIC density. It's not uncommon for virtualization hosts to have eight, ten, or twelve network interface cards (NICs), whereas non-virtualized servers typically have only two, maybe three. This becomes an issue in data centers where edge/distribution switches are placed in the racks, typically to simplify network cabling, and then uplinked to the network core. In that situation, a typical 48-port switch can handle only four virtualization hosts with ten NICs each, so more edge/distribution switches will be needed to fully populate the rack.
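To make the port math concrete, here's a minimal sketch (the function name and NIC counts are illustrative, not taken from any vendor tool) of how quickly high NIC density consumes in-rack switch ports:

```python
# Back-of-the-envelope port math for an in-rack edge/distribution switch.
# All figures below are illustrative assumptions.

def hosts_per_switch(switch_ports, nics_per_host, uplink_ports=0):
    """Return how many hosts a single switch can serve,
    after reserving any ports used as uplinks to the core."""
    usable = switch_ports - uplink_ports
    return usable // nics_per_host

# A 48-port switch: traditional 2-NIC servers vs. 10-NIC virtualization hosts.
print(hosts_per_switch(48, 2))   # 24 traditional servers
print(hosts_per_switch(48, 10))  # only 4 virtualization hosts
```

Reserving a few ports for core uplinks shrinks these numbers further, which is why a rack full of virtualization hosts often needs additional edge switches.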
In addition, most network architects don't have a problem oversubscribing these in-rack edge/distribution switches, because most servers don't fully utilize their network connections. It is this under-utilization of networking resources, along with other resources, that draws many organizations to virtualization as they seek to consolidate workloads and conserve energy, rack space, cooling, or physical servers. In a virtualized environment, however, once multiple workloads have been consolidated onto a host, network traffic increases in proportion to the number of workloads running on that host. Network utilization will no longer be as low per physical server as it was in the past.
Increased network traffic
It will likely be necessary to increase the number of uplinks from the edge/distribution switches to the network core in order to accommodate the increased network traffic from the consolidated workloads.
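As a rough illustration of that sizing exercise (all figures here are assumed for the sake of the example, not measured), the uplink count can be estimated from the aggregate traffic the consolidated workloads are expected to offer:

```python
import math

def uplinks_needed(hosts, nics_per_host, avg_util,
                   nic_gbps=1.0, uplink_gbps=1.0, target_oversub=4.0):
    """Estimate the uplinks an edge switch needs to reach the core.

    avg_util: average fraction of each NIC's bandwidth actually in use.
    target_oversub: the edge-to-core oversubscription ratio the design accepts.
    """
    offered_gbps = hosts * nics_per_host * nic_gbps * avg_util
    return math.ceil(offered_gbps / (uplink_gbps * target_oversub))

# Lightly used physical servers vs. consolidated virtualization hosts:
print(uplinks_needed(hosts=24, nics_per_host=2, avg_util=0.05))  # 1
print(uplinks_needed(hosts=4, nics_per_host=10, avg_util=0.40))  # 4
```

Even with far fewer hosts per switch, the consolidated case demands more uplinks, because per-NIC utilization is no longer negligible.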
Network design problems
Another key change comes from the dynamic nature of the latest generation of virtualization products, which have such features as live migration and multi-host dynamic resource management. Dynamic change capabilities inherent in virtualization mean that network architects can no longer make any assumptions about traffic flows between servers.
When workloads were tied to physical hardware, servers known to exchange lots of network traffic could be colocated in a rack or on the same switch. This idea is known as locality. Now that workloads may dynamically move from one physical host to an entirely different physical host, locality can no longer be used in network designs. Network designs must now accommodate dynamic data flows that may be initiated from any virtualization host to any other virtualization host or physical workload. Instead of the traditional core/edge design, data center networks may need to look more like a full mesh, or a "fabric," that can fully accommodate traffic between any pair of virtualization hosts.
It's interesting to note that the rise of vendors such as Xsigo and 3Leaf, with their I/O virtualization products, validates this need to accommodate dynamic traffic flows not only with regard to networking but also in regard to storage/SAN traffic.
Physical and virtual switch communication
Virtualization has also taken away some visibility at the networking layer in the data center. Only with the latest release of its flagship virtualization product, ESX Server 3.5, has VMware made it possible for physical network switches to communicate with the virtual switches, or vSwitches, via a protocol like Cisco Discovery Protocol (CDP). Other vendors don't offer this functionality. Without it, network engineers have no visibility into the vSwitch, nor any easy way to determine which physical NICs correspond to which vSwitch, information that is often crucial in troubleshooting.
This lack of visibility also affects the ability to catch and possibly block malicious network traffic via traditional network intrusion detection systems (NIDS) or network intrusion prevention systems (NIPS). While some vendors have stepped in with virtual appliances that offer this functionality, these solutions rely on promiscuous NICs and can't leverage switch functionality such as a SPAN port; vSwitches don't offer a SPAN port to which traffic can be directed. vSwitches also lack configurable SNMP support that would allow them to participate in network management systems. The network operations group has to rely on the server operations group to help determine which ports on a vSwitch may be down, which NICs are affected, and so on.
It is this blurring of responsibilities that is perhaps one of the least visible impacts of virtualization. With server hardware virtualization products bringing the network into the server, the roles and responsibilities of the server team and the network team begin to blur and shift. Similar shifts are occurring between the server team and the security and storage teams, as virtualization blurs the boundaries in those areas as well.
Scott Lowe is a senior engineer for ePlus Technology, Inc. He has a broad range of experience, specializing in enterprise technologies such as storage area networks, server virtualization, directory services, and interoperability.