How to choose the best hardware for virtualization
As computing needs in the modern data center change, high-performance GPUs have become an important part of server virtualization. Server workloads such as big data analytics increasingly rely on powerful data visualization and rendering capabilities to express complex data, and servers also need graphics capabilities to handle endpoint-style tasks that are now virtualized.
Until recently, server vendors largely overlooked graphics, because rendering and visualization features were not needed by traditional server workloads such as transactional databases or file and Active Directory servers. System designers opted to forgo a graphics processing unit (GPU), lowering the server's cost and minimizing its energy demands.
But the push to virtualization, along with increased reliance on multimedia and visualization tools, has prompted businesses to reconsider the need for server-based graphics hardware. As server technology continues to evolve, vendors have begun to offer servers with GPUs directly incorporated onto the hardware.
Before you deploy high-performance GPUs, however, be sure to plan and test in advance because servers don't always provide the same slot space and power cabling as desktop PCs and workstations.
The role played by high-performance GPUs
A GPU serves the same role on a server that it does on a client-side computer: it offloads an application's graphics instructions from the main processor. This frees the main processor for other tasks and executes the graphics instructions in hardware, delivering the sophisticated, lifelike rendering, video processing and visualization we expect today. Without a GPU, graphics instructions would require software emulation that ties up the main processor, yielding unacceptable performance.
Application virtualization, for example, might allow a server to host an application shared by multiple users. If that shared application demands graphics functionality, such as a video-rendering tool, then the server must provide that capability. Additionally, virtual desktop infrastructure (VDI) allows endpoints to be hosted on centralized servers. In this case, 3-D modeling software and other graphics tools that would normally run on a desktop PC now run in a virtual machine hosted on a server, which likewise requires the addition of graphics functionality.
Installing GPUs on a virtualized server
High-performance GPUs are typically deployed on traditional servers through a highly specialized graphics adapter card, such as NVIDIA's Tesla, installed in an available PCI Express (PCIe) slot in the server. This is the easiest and most common way to retrofit an existing server that has no onboard GPU, but there are still challenges to consider.
These GPU cards are often large, power-hungry devices, but servers typically provide only one or two PCIe slots, one of which may already be filled by another PCIe expansion device, such as a multiport NIC or an I/O accelerator. Even if a suitable expansion slot is free, a GPU card, complete with a large heat sink and fan assembly, simply might not fit in the space available.
You should also keep in mind that a GPU card can require several hundred additional watts of system power, and the PCIe bus itself can deliver no more than about 75 watts to a card; the rest must come from auxiliary power cables. This requirement can cause problems for server platforms designed with small power supply modules for high efficiency and minimal power use. Some systems may need upgraded power supplies and additional power cabling to accommodate the GPU card.
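To illustrate the power math, here is a minimal sketch, assuming the common PCIe figures (up to 75 W from the slot itself, 75 W per 6-pin auxiliary cable, 150 W per 8-pin). The function name and the 235 W card rating are hypothetical, chosen only to show how the budget works out:

```python
# Hypothetical power-budget check for a GPU retrofit.
# Assumed figures follow PCIe convention: a x16 slot supplies up to 75 W,
# a 6-pin auxiliary cable adds 75 W, and an 8-pin cable adds 150 W.

SLOT_WATTS = 75
AUX_WATTS = {"6-pin": 75, "8-pin": 150}

def gpu_power_budget(card_watts, aux_cables):
    """Return (available_watts, fits) for a card's rated draw, given the
    auxiliary power cables the server chassis can supply."""
    available = SLOT_WATTS + sum(AUX_WATTS[c] for c in aux_cables)
    return available, card_watts <= available

# A ~235 W card needs more than the slot plus one 6-pin cable can deliver...
print(gpu_power_budget(235, ["6-pin"]))           # (150, False)
# ...but fits within the budget of a 6-pin plus 8-pin combination.
print(gpu_power_budget(235, ["6-pin", "8-pin"]))  # (300, True)
```

The point of the exercise: the slot alone never covers a high-end card, so a retrofit plan must confirm the server's power supply actually exposes the required auxiliary connectors.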
Ultimately, adding an after-market GPU card should always be approached as a proof-of-principle project. IT professionals will need to evaluate GPU card deployment techniques very carefully and verify the server's ability to support a GPU load across a range of operating conditions.
Newer server designs, however, can incorporate GPUs directly onto the server's motherboard. The SuperServer 1027GR-TRFT from Super Micro Computer Inc., for example, includes an onboard Matrox G200eW GPU, which simplifies integration: the GPU does not occupy a PCIe slot, and the power supplies are already sized to run the additional GPU silicon.
Software requirements for a server GPU
Graphics platforms are extremely demanding subsystems for any computer, in terms of both physical space and power, but the GPU must also be compatible with the server's operating system.
NVIDIA's Tesla for servers currently supports only 32-bit and 64-bit Linux. Depending on its intended use, the GPU may also require driver support from Windows Server 2012, as well as from a hypervisor such as vSphere or Hyper-V. In short, there must be some mechanism for CPU cores to share GPUs. This is particularly important in VDI deployments, where many desktop instances require graphics functionality.
For decades, server vendors have avoided graphics capabilities, preferring to relegate high-performance rendering and visualization tasks to endpoint systems with their own graphics subsystems. As virtualization consolidates applications and endpoints within the data center, graphics functionality must also shift to the server's hardware. IT professionals, however, will need to take great care to avoid space or power bottlenecks and compatibility issues when retrofitting enterprise-class GPUs onto current servers.