Server virtualization typically focuses on abstracting and provisioning CPU and memory resources, but server I/O has traditionally remained a bottleneck, constraining storage and network traffic and ultimately capping the level of server consolidation a system can achieve. More recent developments in CPU and hypervisor technologies have enabled additional abstraction of the server's I/O subsystem, allowing the server to share I/O resources more effectively and handle more workload I/O traffic than ever before. But hardware-assisted I/O virtualization isn't always automatic or foolproof.
What is I/O virtualization? How does it benefit a virtualized server or its workloads?
Virtualization is a software layer that abstracts a computing workload (an application) from the underlying computing hardware. The hypervisor converts the server's physical resources into virtual resources that can easily be provisioned or adjusted to meet each workload's requirements, which also maximizes the number of workloads a virtualized server can support. This works fine for CPU and memory resources.
However, server I/O has always presented bandwidth problems. For example, a server's single Gigabit Ethernet port can certainly support a single application, but it may simply be inadequate when divided among 10, 15 or more server workloads that share it for network, storage and server-to-server traffic -- a 1 Gbps link split evenly across 15 workloads leaves each with well under 70 Mbps. When I/O bottlenecks occur, computing efficiency falls as CPUs idle awaiting data -- the bottleneck essentially defeats the utilization efficiency of virtualization.
Extending virtualization to the I/O subsystem makes the most of available network interface ports by dynamically sharing bandwidth between workloads, storage and inter-server communication. By alleviating the potential bottlenecks of server I/O, the server can accommodate more workloads and improve workload performance.
Although increased consolidation and improved performance are important benefits of I/O virtualization, IT professionals should also consider its simplified management. Just as virtualization makes CPU and memory provisioning easier, I/O virtualization eases network interface card (NIC) and host bus adapter (HBA) provisioning and improves their utilization on server hardware. Management changes take place in the hypervisor rather than in the individual hardware devices, so less time is required to manage I/O activity. Improved I/O hardware utilization also lowers hardware costs because fewer NIC or HBA devices are needed. And carrying multiple traffic types (e.g., application versus storage) over fewer cables reduces network complexity.
What are the system or processor requirements for I/O virtualization? How is it enabled?
Generally speaking, I/O virtualization requires hardware assistance from the local processors. This includes Intel VT processors, which supplement baseline VT-x virtualization capabilities with VT-c and VT-d functionality. AMD processors provide similar functionality with baseline AMD-V virtualization along with an AMD-Vi-enabled chipset.
For example, VT-c technology supports virtual I/O connections at nearly native link speeds using virtual machine device queues (VMDq) to offload I/O tasks to the NIC. VT-c also employs virtual machine direct connections (VMDc) to allow VMs to access the network directly using single root I/O virtualization (SR-IOV). VT-d technology in the supporting processor chipset handles device assignments and isolates workloads that share the I/O resources. All of these technologies reduce the processing overhead associated with hypervisors and virtual machine monitors. These capabilities are typically available in Intel Xeon 5500 server processors and later.
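Before digging into BIOS menus, administrators can quickly confirm whether the processor even advertises the baseline virtualization extensions. A minimal sketch for Linux systems follows, assuming `/proc/cpuinfo` is available; the `vmx` flag corresponds to Intel VT-x and `svm` to AMD-V:

```shell
# Check whether the CPU advertises hardware virtualization support.
# Linux-only sketch: vmx = Intel VT-x, svm = AMD-V.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "CPU virtualization extensions: present"
else
    echo "CPU virtualization extensions: not found"
fi
```

Note that an absent flag may mean either an unsupported processor or a feature that was disabled in BIOS, so a "not found" result still warrants a firmware check.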
Although processor and chipset support is critical for server virtualization (I/O and otherwise), it is equally important to enable virtualization features in the server's BIOS. For example, Intel-based servers may offer numerous virtualization features that can be enabled or disabled independently through the BIOS. These BIOS features include a master switch such as "Enable Intel Virtualization Technology" and an array of sub-features like "Enable Intel VT-d" (or, on AMD platforms, "Enable AMD IOMMU") to enable chipset I/O virtualization support.
In most cases, I/O virtualization settings are enabled in BIOS by default, but IT staff should inspect all of the server's virtualization settings to verify that the system is properly configured. Otherwise, the system hardware may not allow advanced virtualization, reducing the system's overall performance with virtualized workloads.
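One way to verify the BIOS setting actually took effect is to check whether the operating system sees an active IOMMU (the chipset component behind VT-d and AMD-Vi). This is a Linux-only sketch; the `/sys/class/iommu` path is a common location on modern kernels, but paths and kernel boot options vary by distribution:

```shell
# Confirm the OS detects an active IOMMU after enabling VT-d or
# AMD IOMMU in BIOS. Linux-only sketch; an empty directory usually
# means the feature is disabled in firmware or at the kernel level.
# (Kernel boot logs can also be searched for "DMAR" or "IOMMU".)
if [ -d /sys/class/iommu ] && [ -n "$(ls -A /sys/class/iommu 2>/dev/null)" ]; then
    echo "IOMMU: active"
else
    echo "IOMMU: not detected (check BIOS settings and kernel boot options)"
fi
```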
Are there any problems or drawbacks to I/O virtualization?
It's important to note that I/O virtualization is not a new concept or recent introduction -- hypervisors have always handled I/O virtualization in software. This latest push extends the "virtualization acceleration" capabilities of processors and chipsets, further boosting system performance for I/O tasks. Consequently, there are no major problems or risks associated with I/O virtualization technology yet, but there are some noteworthy issues to consider.
First, hardware-assisted I/O acceleration is still relatively new. Even when a suitable processor is installed and BIOS firmware is in place, the underlying chipsets or other design choices may prevent I/O virtualization from working. This is the principal reason why BIOS settings should be inspected and verified when working with I/O virtualization rather than simply assuming the features are available and enabled. It's entirely possible that some enterprise servers may already have I/O virtualization enabled while others may be unable to use it.
The issue here is that server performance may be measurably better on servers with I/O virtualization features enabled, and this can affect workload placement, migration and consolidation. For example, an I/O-intensive workload running well on an I/O-virtualized server may see performance degradation when migrated to another server that does not offer hardware-assisted I/O virtualization. IT administrators should continue monitoring after migration or rebalancing to ensure continued performance, and be ready to migrate the workload to another I/O-virtualized server if necessary.
Second, the actual benefit of hardware-assisted I/O virtualization is best experienced with I/O-intensive workloads; it is less beneficial for workloads with modest or light I/O needs. When introducing hardware-assisted I/O virtualization to the data center, IT staff should take the time to benchmark each workload's I/O performance before enabling I/O virtualization or migrating workloads onto the I/O-virtualized system, then re-benchmark the workload to get a quantitative comparison of the I/O performance benefit.
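The before-and-after comparison above can be made repeatable with a fixed benchmark definition. As one hypothetical example, a job file for the open-source `fio` tool (assuming it is installed on the test system) might look like this, run identically before and after enabling hardware-assisted I/O virtualization:

```ini
; Hypothetical fio job file (run as: fio vm-io-test.fio).
; Execute the identical job before and after enabling I/O
; virtualization, then compare IOPS and latency figures.
[vm-io-test]
rw=randrw          ; mixed random read/write, a typical VM pattern
bs=4k              ; small blocks stress IOPS rather than bandwidth
size=1g            ; working-set size per run
runtime=60         ; cap each run at 60 seconds
time_based=1
ioengine=libaio    ; Linux asynchronous I/O
direct=1           ; bypass the page cache for a hardware-level measure
iodepth=16         ; queue depth to keep the device busy
```

Keeping the block size, queue depth and runtime constant between runs is what makes the resulting IOPS and latency numbers a fair quantitative comparison.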
And finally, server I/O virtualization can improve I/O performance at the server level, but it has no effect on the greater network architecture -- improving I/O utilization at the server might still cause saturation at a network switch or along a network backbone or a major segment (such as the SAN). As I/O virtualization is deployed across the data center, use network analysis tools to observe LAN performance and identify other potential bottlenecks that arise outside of the server.
I/O traffic-intensive workloads can quickly overwhelm a virtualized server and actually defeat some of the consolidation benefits that virtualization offers. Hardware-assisted I/O virtualization employs a series of CPU and chipset enhancements to improve workload communication and traffic organization, improving I/O utilization and boosting workload traffic performance. But to reap the greatest benefits from I/O virtualization, IT staff must verify each server's hardware support, see that the features are actually enabled in BIOS and monitor workload performance for measurable changes.
Stephen J. Bigelow asks:
What setbacks, if any, have you suffered from using server I/O virtualization?