Successful virtualization installations involve substantial planning in several categories -- especially networking. In the first installment of this two-part tip, I focus on current server virtualization hardware configurations for network virtualization infrastructure and their costs.
Server virtualization hardware and networking: past to present
When server consolidation-based virtualization was just getting under way, administrators frequently struggled to address network connectivity. Consolidation drove the number of copper networking ports per host well above what physical servers had needed, because each physical server had hosted only one operating system.
With VMware installations, for example, it was a best practice to separate role-based network connections onto separate physical media. This meant that the service console (the ESX operating system), the vmkernel interface (the VMotion interface) and virtual machine (VM) network traffic (vSwitches and port groups) resided on separate interfaces. Furthermore, good design called for each connection to have multiple interfaces for redundancy.
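On classic ESX hosts, this role separation could be carried out from the service console with the esxcfg utilities. The following is a minimal configuration sketch, not a complete build; the vSwitch, port group and vmnic names are illustrative:

```shell
# Create a dedicated vSwitch for VMotion traffic (name is illustrative)
esxcfg-vswitch -a vSwitch1

# Link a dedicated physical uplink, plus a second one for redundancy
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1

# Add a port group on that vSwitch for the VMotion role
esxcfg-vswitch -A VMotion vSwitch1
```

VM traffic and the service console would stay on their own vSwitches with their own uplinks, keeping the roles physically isolated.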
Today, on the other hand, server virtualization hardware has adapted to the needs of virtualized data centers. The most visible change is that many virtualized host hardware now have four built-in Gigabit Ethernet (GbE) ports, such as the popular HP ProLiant DL 380 G6, Dell PowerEdge R900 and Dell PowerEdge R710, among others. Four built-in interfaces is one of the most beneficial installation improvements.
While it's still a best practice to separate physical interfaces, it is possible to stack roles on adapters -- including roles on different virtual local area networks (VLANs) -- to deliver the required connectivity at no additional Ethernet cost.
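Stacking roles this way relies on VLAN tagging at the port group level. A hedged sketch from the classic ESX service console, with an assumed VLAN ID of 20 and illustrative names:

```shell
# Add a VMotion port group to an existing vSwitch and tag it with
# VLAN 20 so it can share vSwitch0's uplinks with other roles
# (port group name and VLAN ID are illustrative)
esxcfg-vswitch -A VMotion vSwitch0
esxcfg-vswitch -v 20 -p VMotion vSwitch0
```

The physical switch ports carrying vSwitch0's uplinks would need to be trunked for the VLANs in use.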
Network virtualization infrastructure costs
In many organizations, cost is the top priority. One way to determine which network virtualization infrastructure is best is to calculate cost per port.
For small and medium-sized installations, an Ethernet-based storage protocol -- such as Network File System (NFS) or iSCSI -- can be attractive. In the Gigabit Ethernet world, architecting an NFS or iSCSI virtualized storage protocol can drive the cost-per-port model down. Using interfaces that are built into servers, or added at a nominal cost compared with Fibre Channel interfaces, substantially reduces the cost per port on virtualized servers.
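The comparison comes down to simple arithmetic: total connectivity cost (adapter plus switching) divided by port count. A sketch with entirely illustrative prices -- these are assumptions, not vendor quotes:

```shell
# Cost-per-port comparison for a host's storage connectivity.
# All prices are illustrative assumptions; two ports per adapter.
awk 'BEGIN {
    gbe_nic = 150; gbe_switch_port = 100   # dual-port GbE NIC + per-port GbE switching
    fc_hba  = 900; fc_switch_port  = 400   # dual-port FC HBA + per-port FC switching
    ports = 2
    printf "GbE (iSCSI/NFS) cost per port: $%.2f\n", (gbe_nic + gbe_switch_port * ports) / ports
    printf "Fibre Channel cost per port:   $%.2f\n", (fc_hba  + fc_switch_port  * ports) / ports
}'
```

Even with different figures plugged in, the per-port gap between built-in GbE and dedicated Fibre Channel hardware tends to dominate the model.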
Also, from a cost perspective, switching equipment for GbE is more attractive than Fibre Channel-based switching. With VMware installations, for instance, if you use an Ethernet-based storage protocol, I recommend iSCSI so that the vStorage Virtual Machine File System (VMFS) can be used.
About the author:
Rick Vanover (email@example.com), VCP, MCITP, MCTS, MCSA, is an IT Infrastructure Manager for Alliance Data in Columbus, Ohio. He is an IT veteran specializing in virtualization, server hardware, operating system support and technology management.