How to implement GPUs for high-performance computing

Support the increasing prevalence of HPC workloads

High-performance computing isn't just for scientific researchers and engineers anymore. It also supports workloads such as large-scale transaction processing, and as data centers grow in size and complexity, the need for more powerful technology and better management practices increases.

GPUs handle demanding compute workloads -- such as AI, machine learning and high-performance computing (HPC) -- far more efficiently than CPUs alone. Using GPUs for HPC and AI workloads makes sense for many use cases, but organizations that want to use GPUs for machine learning should consider cheaper alternatives if their workloads are unpredictable. GPU repurposing -- reallocating idle GPUs to tasks such as virtual desktop infrastructure (VDI) sessions -- can also improve cost efficiency.
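The repurposing idea above hinges on spotting idle GPUs in the first place. A minimal sketch of that detection step, assuming `nvidia-smi` is available on the host; the 10% utilization threshold is a made-up value an administrator would tune:

```python
import subprocess

# Hypothetical cutoff: below this utilization, treat the GPU as idle.
IDLE_UTILIZATION_PCT = 10


def parse_utilization(csv_output: str) -> list[int]:
    """Parse one utilization percentage per line, as produced by
    `nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits`."""
    return [int(line.strip()) for line in csv_output.strip().splitlines() if line.strip()]


def idle_gpus(utilizations: list[int], threshold: int = IDLE_UTILIZATION_PCT) -> list[int]:
    """Return the indices of GPUs whose utilization falls below the threshold."""
    return [i for i, u in enumerate(utilizations) if u < threshold]


def query_gpu_utilization() -> list[int]:
    """Ask the driver for per-GPU utilization via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_utilization(out)
```

GPUs flagged as idle could then be handed to the orchestration layer for reassignment to VDI sessions; the polling cadence and how long a GPU must stay idle before repurposing are policy decisions this sketch leaves out.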

Along with a technology upgrade, IT managers should carefully consider how they configure and provision old and new virtual resources to support HPC workloads. Management best practices include planning clusters around HPC workloads, taking server configuration into account, carefully selecting and configuring hypervisors, knowing how and when to implement GPUs, and properly configuring VM, storage and network resources.