
How VDI hardware requirements differ from virtualization

VDI has specific hardware needs that servers hosting other virtualized workloads may not meet.

A virtual desktop infrastructure allows a data center server to deliver complete desktop instances to a variety of devices, including conventional PCs, thin clients and even zero client endpoints. But every VDI instance is processed and stored on the server, and even a relatively small number of instances can demand significant computing resources and network bandwidth. VDI deployments must therefore start with a detailed consideration of server-side capabilities and an assessment of server hardware upgrade needs. This article examines some common server issues related to VDI hardware requirements.

Server requirements to support VDI

It's important to note that there is no single list of VDI hardware requirements. The issue is not a lack of support; VDI will operate on almost any current virtualized server. Rather, the number of VDI instances that may be deployed on a server is limited by that server's available computing resources.

As an example, a typical "white box" server for an enterprise-class VDI deployment might include dual eight-core processors and at least 192 GB of fast DDR3 memory. For storage, it is certainly possible to keep VDI instances on a centralized SAN, but the SAN should then run over a separate network (such as Fibre Channel or a physically separate LAN) so that storage traffic and VDI traffic do not compete on the same LAN. The alternative is local storage on each VDI server to load and protect its instances -- this means the server will need physical space for perhaps 16 high-performance 10,000-15,000 RPM SAS 6 Gbps hard drives (implying a 2U or 3U chassis).
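A quick back-of-the-envelope calculation shows why the drive count matters. The following Python sketch uses assumed figures throughout -- the per-drive IOPS, write ratio, RAID 10 penalty and per-desktop load are illustrative placeholders, not measurements from any vendor:

    # Rough IOPS sizing for the local drive pool described above.
    # All per-drive and per-desktop figures are illustrative assumptions.
    DRIVES = 16                 # 10,000-15,000 RPM SAS drives
    IOPS_PER_DRIVE = 175        # assumed average for a 15K RPM SAS drive
    RAID10_WRITE_PENALTY = 2    # each logical write costs two physical writes
    WRITE_RATIO = 0.8           # steady-state VDI I/O skews heavily to writes
    IOPS_PER_DESKTOP = 15       # assumed steady-state load per running desktop

    raw_iops = DRIVES * IOPS_PER_DRIVE
    # Effective IOPS once the RAID 10 write penalty is applied.
    effective_iops = raw_iops / ((1 - WRITE_RATIO) + WRITE_RATIO * RAID10_WRITE_PENALTY)

    print("raw pool IOPS:     ", raw_iops)                             # 2800
    print("effective IOPS:    ", round(effective_iops))                # ~1556
    print("desktops supported:", int(effective_iops // IOPS_PER_DESKTOP))  # ~103

Under these assumptions, the 16-drive pool supports roughly 100 desktops -- comfortably inside the 80-to-130-instance density range discussed below.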

Larger and more powerful servers can support more VDI instances on the same box, while older or less-capable servers will support fewer instances. A server like the example above might be expected to host anywhere from 80 to 130 instances, though the exact number of VDI instances on any server depends on other details like the size and complexity of the base image, the level of personalization, the number of virtualized applications, user and application activity across the LAN and so on.

This may seem like a lot of instances, but consider that an enterprise large enough to justify a VDI initiative may employ 1,000 people or more -- this means at least 10 such servers would be required for the deployment, along with additional servers to support growth and failover. An enterprise with 5,000 users would need roughly 50 such physical servers with the added costs of hypervisor and VDI platform licensing.
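The arithmetic is simple enough to sketch. This hypothetical Python helper reproduces the estimates above; the density, growth and spare-host figures are assumptions for illustration, not vendor guidance:

    import math

    # Minimal capacity sketch, assuming the example server above can host
    # about 100 desktops. Figures are illustrative assumptions only.
    def servers_needed(users, instances_per_server=100, spare_hosts=1, growth=1.10):
        # Round up the base host count, then add N+1 spares for failover.
        return math.ceil(users * growth / instances_per_server) + spare_hosts

    for population in (1000, 5000):
        print(population, "users ->", servers_needed(population), "hosts")

This yields 12 hosts for 1,000 users and 56 for 5,000 -- slightly above the bare 10-per-1,000 ratio because the sketch bakes the growth and failover headroom directly into the count.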

Graphics co-processing support for a VDI server

VDI works by handling all of the processing tasks within the server, and using the endpoint device only as an I/O platform (e.g., video, mouse and keyboard). So all of the desktop and visual rendering work takes place within the host server's processor, and the resulting images are relayed to the endpoint across the LAN. This is often adequate for rendering basic Windows-type desktop dialogs and other elements, but advanced graphics tasks (like streaming video or 3-D graphics) can pose a major processing problem.

The issue is hardware support. Servers often omit graphics processing units (GPUs) because traditional server-side roles like file serving or Active Directory do not use graphics. But when graphics-intensive work must be processed, there is no GPU available to offload the burden -- leaving the CPU to grind through the rendering with inefficient software emulation. The result is a significant performance penalty that can impact every VDI instance on the affected CPU core. As VDI use matures and embraces more sophisticated visualization applications, it's important for VDI servers to include GPU support as a boost to system performance.

A GPU can be added to a server in several different ways. The most common approach is to install the GPU as an expansion device, such as a PCIe adapter card. Everyday desktop PCs routinely use this approach because PCIe slots are plentiful and readily accessible, and servers can use powerful server-class products like NVIDIA's Kepler-based GRID K1 and K2 adapters. However, servers may not provide enough PCIe slots to accommodate GPU adapters, which are usually quite large and sport several cooling fans. The limited PCIe slots available may also already be occupied by other expansion devices, such as multiport network adapters or storage accelerators.

An alternative is an external GPU system like the Cubix GPU-Xpander, which uses a low-profile PCIe adapter to connect an external, independently powered GPU chassis. This approach avoids overtaxing the server's limited power supply and sidesteps the space constraints around its PCIe slots.

A third emerging approach is to integrate the GPU directly into the processor package, so every CPU socket has access to its own GPU. As an example, Intel adds a GPU to the Xeon E3 family and plans transcoding improvements to boost graphics performance. RISC processors based on ARM architectures are also adding GPUs to handle graphics tasks. Integrated GPUs are probably the most efficient solution because they do not overwhelm the server's power supply and do not use a PCIe slot, but IT planners may need to await a future technology refresh to acquire servers with CPU/GPU integration.
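Whichever approach is used, it is worth verifying what a given host actually exposes to the operating system before counting on GPU offload. Below is a minimal sketch, assuming a Linux VDI host with the standard lspci utility (from the pciutils package) installed; the strings it matches are the common PCI display device classes:

    # Check, assuming a Linux host with lspci available, whether any
    # GPU hardware is visible to the operating system.
    import subprocess

    def list_gpus():
        """Return lspci lines describing display-class PCI devices."""
        out = subprocess.run(["lspci"], capture_output=True, text=True, check=True)
        classes = ("VGA compatible controller", "3D controller", "Display controller")
        return [line for line in out.stdout.splitlines()
                if any(cls in line for cls in classes)]

    gpus = list_gpus()
    if gpus:
        print("GPU(s) detected:")
        for line in gpus:
            print(" ", line)
    else:
        print("No GPU detected; rendering will fall back to the CPU.")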

VDI server appliances

Server systems built to meet VDI hardware requirements are commercially available, though these should be considered more along the lines of pre-configured "packages" than specially designed systems. One example is Dell's DVS Simplified Appliance. The Desktop Virtualization Solutions (DVS) package is based on Dell's standard PowerEdge R720 or T620 servers bundled with Citrix XenServer or Microsoft Hyper-V and VDI management tools. Each appliance is reported to host up to 129 users, and additional appliances can easily be deployed to support more users.

Other VDI appliances are also available, including VMware's Rapid Desktop Appliance based on VMware Horizon View, the Vertex VDI appliances from Tangent and the vSTAC VDI appliance from Pivot3, among others.

Since packages like the DVS rely on standard servers, there is no custom or specialized circuitry to differentiate the "appliance" from a conventional server. Features like N+1 redundancy, automatic failover, load balancing, desktop provisioning and desktop image management are all handled through software tools.

VDI instance support is directly related to computing resources, but VDI hardware requirements vary depending on the complexity of desktop images and layered features like personalization and application virtualization. All of these factors make it extremely challenging to determine the exact amount of resources needed for every desktop instance -- and the total number of instances that a given server will support. This uncertainty underscores the need for extensive system testing in well-planned proof-of-principle projects and limited deployments (such as select workgroups or departments) prior to general deployment across an enterprise.

Next Steps

VDI hardware requirements and checklist

Choosing VDI storage hardware and features

How hyper-converged infrastructure lets VDI scale
