E-Handbook: How to implement GPUs for high-performance computing Article 4 of 4



GPU repurposing helps improve infrastructure cost efficiency

A GPU's cost efficiency decreases when left idle. But, by putting GPUs to work for virtual desktop infrastructure sessions, admins can reduce costs and ensure improved performance.

GPUs are dynamic and provide better performance than CPUs for some workloads. Organizations can use GPUs to accelerate workloads at night and then repurpose the GPUs during the day to support VDI.

An individual GPU core may run at only 1.6 GHz, but there are hundreds of cores per card and usually several cards per physical host, so the aggregate throughput for parallel work is enormous. Many GPU farms run at capacity overnight and then sit idle during the day, which significantly decreases their cost efficiency.
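
The parallelism argument can be put in rough numbers. This is a back-of-the-envelope sketch, and every figure except the 1.6 GHz core clock mentioned above is an assumed placeholder, not a spec for any real card:

```python
# Hypothetical figures: only the 1.6 GHz per-core clock comes from the text.
gpu_core_ghz = 1.6          # per-core GPU clock (from the article)
gpu_cores_per_card = 2048   # assumed core count for a data center card
cards_per_host = 4          # assumed number of cards in one physical host

cpu_core_ghz = 3.2          # assumed CPU clock
cpu_cores_per_host = 32     # assumed CPU core count per host

# Aggregate cycles available per second, in GHz-core-equivalents.
gpu_cycles = gpu_core_ghz * gpu_cores_per_card * cards_per_host
cpu_cycles = cpu_core_ghz * cpu_cores_per_host

print(f"GPU aggregate: {gpu_cycles:,.0f} GHz-core-equivalents")
print(f"CPU aggregate: {cpu_cycles:,.0f} GHz-core-equivalents")
print(f"Ratio: {gpu_cycles / cpu_cycles:.0f}x")
```

Even with these modest assumed numbers, the slow individual cores add up to a two-order-of-magnitude throughput advantage for work that parallelizes well, which is exactly why idle hours are so costly.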

But IT administrators can put idle GPUs to work during the day, too. Admins can use GPU repurposing to support virtual desktops for employees whose work involves graphics-hungry applications, such as computer-aided design and computer-aided manufacturing software. The more often the GPUs are in use, the more cost-efficient they become.

For example, in the oil and gas industry, satellite imagery can provide detailed topographies. These topographies are then fed into a cluster of GPU-enabled servers, which locates and annotates areas worth searching for oil reserves. Once the overnight processing has identified candidate reserves, the GPUs can render detailed 3D visual maps into a portable video file. The company can then share that video, and viewing it requires no GPUs at all.

Once the GPUs finish these overnight tasks, they're returned to the resource pool. At this point, companies can reconfigure the GPUs to run GPU-enabled VDI sessions. In enterprise IT, companies have traditionally used GPUs to make VDI sessions more responsive, but GPUs also let companies scale out and perform trillions of calculations in parallel, which delivers results faster. This dynamic nature enables companies to use the hardware to its fullest potential and thereby manage infrastructure costs effectively.

How to change from GPU farm to VDI support

Reassigning GPUs from data processing tasks to VDI consumption is a disruptive process and often requires an infrastructure reboot to change the configuration. But vendors such as Nvidia have created products to accommodate this switch without sacrificing VDI performance. Nvidia's GPUs enable admins to remove a GPU from one configuration, such as a high-performance GPU-enhanced server in the cluster, and allocate it to virtual desktops instead. Admins can suspend, resume, script and automate compute jobs according to what they need, as long as the hardware supports hot add and hot remove of resources.


But not all admins can afford large numbers of GPUs, especially if they don't have large data centers and demanding workloads. Admins whose workloads rely heavily on GPUs should purchase them outright. In that case, they can reconfigure the GPUs manually or use a third-party automation tool to switch between VDI and high-performance computing.

But, if admins don't have the workloads to justify purchasing GPUs, they can rent GPU-enabled cloud instances by the minute and scale up or down as needed. This helps them manage the cost of what can otherwise become extremely expensive hardware. The newer the GPUs, the more they cost.
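
The rent-or-buy decision comes down to utilization, and a simple break-even calculation makes the trade-off concrete. All of the prices below are hypothetical placeholders, not quotes from any provider:

```python
# Hypothetical figures for a rent-vs-buy break-even sketch.
purchase_price = 30_000.0      # assumed upfront cost of a GPU-equipped server
useful_life_years = 3          # assumed depreciation window
rental_rate_per_hour = 3.00    # assumed per-hour cloud GPU instance rate

# Owning costs this much per hour whether or not the GPU does any work.
owned_cost_per_hour = purchase_price / (useful_life_years * 365 * 24)

# Break-even utilization: the fraction of hours the GPU must actually be
# busy before owning becomes cheaper than renting only the hours needed.
break_even_utilization = owned_cost_per_hour / rental_rate_per_hour

print(f"Owning costs {owned_cost_per_hour:.2f}/hour around the clock")
print(f"Renting wins below {break_even_utilization:.0%} utilization")
```

Under these assumed numbers, a shop whose GPUs sit idle more than roughly 60% of the time is better off renting, which is precisely the gap that repurposing idle GPUs for daytime VDI is meant to close for owners.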

Because of this, the cloud has become a contender for many admins looking for a platform that increases GPU performance while ensuring cost efficiency. Cloud providers such as Google already sell GPU-enabled VMs.

Cloud platforms generally get the latest generation of GPUs well before the hardware is available to on-premises data centers. Organizations that need the latest performance can achieve it without the upfront cost of these GPUs. However, some forward-thinking GPU companies, such as Advanced Micro Devices, now rent out a physical cluster in custom data centers.

This fixes one of the big problems with GPU utilization: The data needs to be local to GPUs; otherwise, latency can slow performance and negate the advantages of GPUs.
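
The latency point is easy to quantify. Using assumed figures for dataset size, link speeds and job length (none of which come from the article), transfer time over a slow link can dwarf the compute time the GPUs actually need:

```python
# Hypothetical numbers showing how remote data can negate a GPU speedup.
dataset_gb = 5000.0           # assumed 5 TB input dataset
wan_gbit_per_s = 1.0          # assumed WAN link to a remote GPU cluster
local_gbit_per_s = 100.0      # assumed local high-speed interconnect
gpu_compute_hours = 2.0       # assumed runtime of the GPU job itself

# Hours to move the dataset: size in gigabits divided by link rate.
wan_hours = dataset_gb * 8 / wan_gbit_per_s / 3600
local_hours = dataset_gb * 8 / local_gbit_per_s / 3600

print(f"WAN transfer:   {wan_hours:.1f} h (vs {gpu_compute_hours} h of compute)")
print(f"Local transfer: {local_hours:.2f} h")
```

With these assumptions, shipping the data to a distant cluster takes several times longer than the computation itself, which is why co-locating data with the GPUs matters so much.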

But cloud rental won't work for every admin; use cases vary too widely. For admins who want to own their GPUs rather than rent them from the cloud, it's possible to buy massively parallel GPU systems. Generally, these come in the form of several cards connected by a high-speed proprietary bus.

Also, GPUs aren't all created equal. VMware, for example, publishes a list of supported GPU configurations, including Nvidia setups.

One issue that admins new to massively parallel GPU setups run into is that applications don't always support GPU acceleration. Some applications can't be GPU-enhanced, but engineers are rewriting those that can to take advantage of GPUs. Vendors can often provide a list of GPU-enabled applications, so admins can pick the application they require and get details on how to obtain the optimized version.
