Heavily virtualized environments can develop bottlenecks that undermine virtual machine performance, so network vendors like Neterion Technologies, Mellanox Technologies Ltd. and SMC Networks Inc. have developed products to work around the problem. But one expert says there is a better way to avoid bottlenecks than buying new equipment.
In heavily virtualized environments, data centers can run out of capacity and hit performance bottlenecks because of factors such as virtual machine (VM) sprawl and increased storage, CPU, memory and network utilization.
"The major issue … is saturating or overload[ing] physical host server resources," said Anil Desai, an independent virtualization consultant based in Austin, Texas. "You need to plan for the aggregate load of each VM, add the load placed by any applications or services running directly on the host, and then include a virtualization 'overhead' factor. For example, if you have a single gigabit Ethernet [GbE] physical connection on a server and you place a bunch of network-intensive VMs on that machine, the physical NIC [network interface card] might quickly become a bottleneck."

NIC vendors converge on virtualization
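Desai's planning rule can be sketched as a quick capacity check. All of the numbers below — per-VM loads, host load, the 10% overhead factor and the 1 GbE link — are illustrative assumptions, not figures from the article.

```python
# Capacity check per Desai's rule: sum each VM's expected NIC load,
# add the load from services running directly on the host, then apply
# a virtualization "overhead" factor and compare against the link.

def nic_saturated(vm_loads_mbps, host_load_mbps, overhead_factor, link_mbps):
    """Return True if the physical NIC would be a bottleneck."""
    aggregate = sum(vm_loads_mbps) + host_load_mbps
    return aggregate * (1 + overhead_factor) > link_mbps

# Four network-intensive VMs sharing a single 1 GbE connection:
vm_loads = [300, 250, 200, 150]                      # Mbps per VM (assumed)
print(nic_saturated(vm_loads, 50, 0.10, 1000))       # 950 * 1.1 = 1045 > 1000 -> True
```

With a 10% overhead factor, the four VMs plus 50 Mbps of host traffic already exceed the gigabit link, which is exactly the scenario Desai warns about.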
Irvine, Calif.-based SMC Networks has developed 10 GbE network interface cards for Citrix Systems Inc.'s XenServer and VMware Inc.'s virtualization technologies.
SMC's product accelerates data transfer between the guest OS and the outside world via a shortcut, according to Iain Kenney, the director of product marketing at SMC. SMC's NIC bypasses the hypervisor, enabling a direct connection from the guest OS to the network, instead of routing traffic from the guest OS through the hypervisor to the NIC driver and then onto the network, Kenney explained.
"If you are trying to send a 700 MB piece of data this way, it is inefficient," Kenney said. "This is a more advanced network driver; with virtualized OSes, we have special hooks in place that accelerate the transmission of data."
Santa Clara, Calif.-based interconnect product supplier Mellanox Technologies announced its ConnectX EN 10 GbE NIC adapters for VMware- and Citrix XenServer-based virtual environments in February.
Adapters from Mellanox maintain 9.6 Gbps throughput as the number of VMs in VMware ESX Server 3.5 scales up to 16 in multicore CPU environments, according to Mellanox. This improves server utilization as more VMs can be deployed per physical server while maintaining application I/O performance.
Cupertino, Calif.-based Neterion rolled out its 10 GbE adapter for virtualization, the x3100, on February 25. The company has shopped it around to OEMs and expects it to be embedded into systems later this year, said Ravi Chalaka, the vice president of marketing at Neterion.
Neterion's x3100 product is designed with 17 channels in the NIC for a 16-core system, plus one extra channel. Each channel can be assigned its own virtual machine or application. The benefit of an assigned channel is that a VM or application is not affected by any other applications or virtual machines, said Chalaka.
"The I/O channel requirements have increased because we have multicore processors, and we are running more and more VMs on physical servers," said Chalaka.
Experts say 10 GbE NIC 'not worth it' for virtualization
Despite the 10 GbE push from vendors this year, virtualization expert Andrew Kutz said networking bottlenecks are an issue only for data centers that are virtualizing "heavy" applications.
"For the last few years, we have been virtualizing the easy stuff: print servers, Web servers, small one-offs. However, now that people are starting to virtualize Exchange, SQL, Oracle, etc., networking may very well become a bottleneck," he said.
As for investing in 10 GbE, Kutz said it is not worth it -- yet -- and suggested simply segregating VMs instead.
"The cards are still too expensive, and you would have to upgrade switches and other networking infrastructure to support 10 GbE as well," said Kutz. "The best way to improve network connectivity is to segregate your VMs so that the network-intensive ones are on isolated links or the big boys are grouped into small clusters so their traffic does not impact others. Additionally, you can implement QoS [quality of service] on the virtual switch/port group to throttle a VM's bandwidth."
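Virtual-switch traffic shaping is configured through the hypervisor's management tools, but the throttling Kutz describes is essentially a token bucket: a VM may burst briefly, then is held to a sustained rate. The sketch below is a minimal illustration of that mechanism, not VMware's implementation; the 100 Mbps cap and burst size are assumed values.

```python
# Minimal token-bucket sketch of the bandwidth throttling that
# virtual-switch QoS applies to a VM's port group. Tokens refill at
# the configured rate; a packet is sent only if tokens cover it.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # bytes refilled per second
        self.capacity = burst_bytes       # maximum burst, in bytes
        self.tokens = burst_bytes         # start with a full bucket

    def refill(self, elapsed_s):
        """Add tokens for elapsed time, capped at the burst size."""
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed_s)

    def allow(self, packet_bytes):
        """Send the packet if tokens cover it; otherwise drop or queue."""
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

bucket = TokenBucket(rate_bps=100_000_000, burst_bytes=32_768)  # 100 Mbps cap
print(bucket.allow(1500))   # an MTU-sized packet fits the initial burst -> True
```

A network-intensive VM behind such a bucket can no longer saturate the shared physical NIC, which is the point of Kutz's QoS suggestion.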
Let us know what you think about the story; email Bridget Botelho, News Writer.
Also, check out our Server Virtualization blog.