Back when 1 Gbps Ethernet (GbE) was the only option for a virtualization network, administrators needed large numbers of network interface cards (NICs) to support their organizations’ needs.
But having more than a handful of 1-GbE NICs makes hardware management and cabling pretty complicated. Instead, with just a pair of 10-GbE NICs, you can aggregate storage, production networking, management and migration traffic.
Virtualization network management with 1-GbE NICs
In the early days of virtualization, each NIC had a distinct responsibility in the virtualization network. A single NIC (or two, for failover) was recommended for virtualization network management. Over a dedicated high-security network, this NIC connected physical hosts to their hypervisor’s management platform, separating virtual machine (VM) management functions -- power on, power off, snapshots and so on -- from the production traffic each server carried for clients.
When 1-GbE NICs were the only option, a completely separate NIC was recommended for migration traffic. Performing multiple simultaneous VM migrations -- with Microsoft Hyper-V Live Migration or VMware vMotion, for instance -- could consume so much throughput that it would interfere with management traffic. Some migration traffic also crossed the wire in clear text, which created a security problem if the virtualization network was not appropriately locked down.
With this setup, multiple NICs were also required for production networking and storage, and admins had to keep storage connections separate from production networking. Even with multiple bonded connections, the throughput requirements between servers and storage could consume all the available bandwidth, starving the production network that clients depended on.
So in the 1-GbE days, virtualization networking required at least six NICs for a server connecting to iSCSI shared storage. That’s a lot of hardware and quite an administrative challenge to keep your cabling connections straight.
Virtualization networking with 10-GbE NICs
Luckily, 10-Gbps Ethernet NICs arrived on the scene just in time to ease virtualization network management. In fact, 10 GbE is a common component in many of the converged infrastructure offerings you’re seeing from hardware manufacturers today.
Why? Simply put, 10-GbE NICs offer an order of magnitude more throughput than their predecessors. A storage connection that might consume 50% to 80% of a 1-GbE link, for instance, drops to a far more manageable 5% to 8% of a 10-GbE link.
Bringing utilization down from 80% to 8%, for example, means that storage, production networking, management and migration traffic could potentially be aggregated over a single pair of load-balanced 10 Gbps Ethernet NICs. The mileage for your virtualization network will vary, of course, and it’s important to measure your actual load before shifting to this design. That said, the reduction in sheer cabling requirements is a huge advantage for virtualization network management if you embrace the 10 GbE standard.
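The utilization math above is simple enough to sketch directly. The traffic figure below is an illustrative assumption, not a measurement -- as the article notes, you should measure your actual load before consolidating:

```python
# Rough utilization math for moving storage traffic from 1 GbE to 10 GbE.
# The 0.8 Gbps storage load is an assumed example figure, not a measurement.

def utilization(traffic_gbps: float, link_gbps: float) -> float:
    """Fraction of a link's capacity consumed by a given traffic load."""
    return traffic_gbps / link_gbps

storage_traffic = 0.8  # assumed iSCSI peak: 80% of a 1-GbE link

on_1gbe = utilization(storage_traffic, 1.0)    # 0.80 -> link nearly saturated
on_10gbe = utilization(storage_traffic, 10.0)  # 0.08 -> plenty of headroom

print(f"1 GbE:  {on_1gbe:.0%}")
print(f"10 GbE: {on_10gbe:.0%}")
```

The same arithmetic explains why a single load-balanced pair of 10-GbE links can absorb storage, production, management and migration traffic at once: even several such loads together stay well below saturation.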
Converged infrastructure proponents highlight ease of installation and modular hardware as two of its selling points. With modular hardware, adding more resources to your virtual infrastructure requires little more than snapping components into place.
And with the cabling reduction that 10 GbE brings, adding resources becomes even friendlier to IT generalists. In the not-too-distant future, the entry-level staff who build desktops today will tend hardware that’s simple to install, remove and manage. That possibility frees experienced IT pros to move out of the server room and focus on higher-value tasks.
How switching to 10 GbE affects personnel
Obviously, moving to the 10 Gbps Ethernet standard requires some procedural changes. Some tasks typically handled by the network team will now come under the server team’s responsibility.
Aggregating traffic on a pair of connections requires trunking a series of virtual LANs (VLANs) into those connections. But managing this trunking process -- and the resulting access connections to virtual switches and VMs -- isn’t traditionally part of a server administrator’s job. Server admins must learn the purpose of each VLAN to avoid mistakes when assigning VMs, such as accidentally placing a VM on a DMZ VLAN instead of an internal VLAN. Cooperation between the network and server teams is critical to ensure that virtualization network settings are configured correctly.
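As a hypothetical sketch of the server team’s side of that trunking work, assuming a VMware ESXi host with a standard vSwitch named vSwitch1, and port group names and VLAN IDs invented purely for illustration, the tagging might look like this with esxcli:

```shell
# Illustrative only: vSwitch, port group names and VLAN IDs are assumptions.
# The physical switch ports facing the host must already be configured as
# 802.1Q trunks carrying these VLANs (on Cisco IOS, roughly:
#   switchport mode trunk
#   switchport trunk allowed vlan 10,20).

# Create one port group per traffic type on the shared vSwitch.
esxcli network vswitch standard portgroup add \
    --portgroup-name "Management" --vswitch-name vSwitch1
esxcli network vswitch standard portgroup add \
    --portgroup-name "Production" --vswitch-name vSwitch1

# Tag each port group with its VLAN ID so the traffic types stay
# separated on the trunked pair of 10-GbE uplinks.
esxcli network vswitch standard portgroup set \
    --portgroup-name "Management" --vlan-id 10
esxcli network vswitch standard portgroup set \
    --portgroup-name "Production" --vlan-id 20
```

Attaching a VM’s virtual NIC to the wrong port group here is exactly the DMZ-versus-internal mistake described above, which is why server admins need to know what each VLAN is for.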
Some data centers will never get past the distrust between network and server administrators. Others may be locked into that division of responsibility because of certain requirements. But there are ways around this obstacle.
One example is Cisco Systems’ Nexus 1000V virtual switch, which is available for the VMware vSphere platform. This switch pulls Cisco functionality into the virtual infrastructure, returning virtualization network management responsibilities to the network admins. It gives them the instrumentation they require, and it enables complex routing, security and other features that aren’t natively available in VMware’s hypervisor.
Organizations that want to preserve the separation of network and server admin responsibilities should look into these types of tools for virtualization network management. And, if only for the cabling simplification, every virtual infrastructure should replace 1-GbE NICs with 10-GbE NICs, which are now much less expensive than they once were. Your data center manager will appreciate having far fewer cables lying around.