GPUs provide systems with tremendous performance boosts for applications that can be parallelized to take advantage of the hundreds of small, linked cores of the typical platform. Well-known use cases include graphics processing in all its forms -- video editing, rendering and so on -- scientific computing, and data stream processing, which includes big data analytics and high-speed communications/storage data services.
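To make the parallelization point concrete, here is a minimal sketch of the data-parallel pattern GPUs exploit, mimicked with CPU threads from Python's standard library; on a GPU, each element would map to one of those many small cores instead. The function and data names are illustrative only.

```python
# Illustrative sketch: the same independent per-element operation applied
# across an array is what makes a workload easy to parallelize on a GPU.
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel):
    # Per-element "kernel": clamp an 8-bit brightness adjustment.
    return min(pixel + 40, 255)

frame = list(range(0, 256, 16))  # stand-in for a row of 8-bit pixel data
with ThreadPoolExecutor(max_workers=4) as pool:
    # Four workers stand in for the hundreds of cores on a real GPU.
    result = list(pool.map(brighten, frame))
print(result[:4])  # → [40, 56, 72, 88]
```

Because no element depends on any other, the work divides cleanly across however many cores are available, which is the property that lets GPUs scale these workloads.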
Many of these use cases are moving to cloud computing, but demanding performance requirements and code complexity make this a nontrivial job, and GPU computing lags somewhat behind general computing in the cloud. A key enabler for the transition was NVIDIA's release of the GRID product two years ago, which allowed applications written for NVIDIA GPUs to run on virtual GPUs created by software that orchestrates the resources of a real GPU.
This capability triggered cloudward moves by Adobe and NVIDIA itself. Adobe delivered a cloud version of its video editing suite and began deprecating the licensed standalone version. The typical editing configuration went from an expensive workstation with expensive software licenses to a tablet driving a virtual workstation running in Adobe's cloud. The resulting drop in the cost of entry to editing exploded the market with many more users, and Adobe has flourished as a result.
NVIDIA offered a GPU cloud that let programmers get acquainted with the approach in a sandbox-type environment. Subsequently, Amazon Web Services and others have offered GPU instances on their public clouds, and the virtual GPU is now a mainstream platform. It's worth mentioning that high-performance computing has been slower to accept the cloud, but virtualization has reduced the price of entry so much that research projects locked out of supercomputers can share resources on a cloud basis and gain access to new capacity. Estimates point to schedule reductions of 50% or more for many projects.
Early in 2016, Advanced Micro Devices (AMD) entered the virtual GPU race. It took a hardware, rather than software, approach to handling multi-tenancy issues, which offers, at least in theory, the same level of security that the CPU gives to multi-tenant VMs. AMD claims a cost/performance edge over NVIDIA, but this has historically been a game of leapfrog and clever benchmarking between the two companies.
AMD's offering opens the door for the Open Computing Language (OpenCL) to join CUDA as an application language base for parallel computing in the cloud, so we should see fierce competition and price erosion in the near future.
The hot use case for GPU virtualization today looks to be virtual desktop infrastructure (VDI). Rather than buying -- and supporting -- vast numbers of desktop PCs, enterprises are reaching a tipping point in moving to mobile devices that use a browser to display cloud-server-based virtual desktops. This is akin to the Adobe model, and the consequent cost and support savings are significant. Virtual desktops should soon be able to take advantage of containers to improve server utilization.
Right behind the VDI use case is growth in big data analytics, which these virtual GPU instances will service. Hadoop-class problems fit GPUs well, so both public and hybrid clouds will need to avail themselves of virtual GPUs.
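Why do Hadoop-class problems fit GPUs so well? Such jobs decompose into an embarrassingly parallel map stage plus a reduce, and the map stage is exactly the shape of work a pool of GPU cores soaks up. A toy sketch of the pattern in plain Python (the chunk data and function names are illustrative, not any Hadoop API):

```python
# Map/reduce word count: the canonical Hadoop-class problem shape.
from collections import Counter
from functools import reduce

def map_count(chunk):
    # Map stage: each chunk is processed independently, so chunks can be
    # farmed out in parallel to GPU instances or any pool of workers.
    return Counter(chunk.split())

def reduce_counts(a, b):
    # Reduce stage: merge the per-chunk partial results.
    return a + b

chunks = ["big data analytics", "big data streams", "gpu data"]
total = reduce(reduce_counts, map(map_count, chunks), Counter())
print(total["data"])  # → 3
```

The independence of the map stage is what lets the same job scale from a handful of CPU workers to thousands of GPU threads without restructuring.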
Research into using GPUs to deliver storage and networking data services such as compression, deduplication and erasure coding is gaining new urgency as we begin to realize that solid-state drives with transfer rates of 10 gigabytes per second are less than a year away, while 200 Gigabit Ethernet may debut in 2018. CPUs can't keep up, and hardware accelerators aren't flexible enough. Using a virtual GPU to execute a variety of microservices for storage and networking likely makes a good deal of sense and could resolve one dilemma of software-defined infrastructure.
Virtual GPUs are still a work in progress, and we can expect a lot of new features and performance boosts over the next two years. AMD currently is supported by only one hypervisor, ESX, while NVIDIA also supports XenServer. Intel hovers in the background with a product somewhat similar to NVIDIA's that supports KVM and XenServer. In the fine detail, there are differences in the way different classes of resources are distributed. We can expect convergence on what is supported, though hardware versus software as the control for multi-tenancy looms as a differentiator in AMD's favor.
The coming internet of things explosion and consequent growth in processing unstructured data will make GPU instances much more important in mainstream computing, while the data streaming acceleration could possibly make the approach ubiquitous on cloud servers. Whatever happens, this is an important facet in planning out a hybrid cloud.