Server virtualization has matured to the point that most workloads can now run in a virtual machine. Even so, some workloads are more challenging to virtualize than others. Graphically intensive applications, for example, have always presented a challenge, because under normal circumstances graphical processing within a VM is handled by the host server's CPU. That's fine for most workloads, but some can benefit significantly from hardware-accelerated graphics.
In Hyper-V, this type of hardware acceleration can be achieved through the use of RemoteFX and a virtual graphics processing unit (vGPU). The vGPU hands graphical processing off to a physical GPU within the host server rather than using the host's CPU.
Before you enable Hyper-V GPU offloading, there are two important things you need to know. First, you should not enable GPU offloading for the majority of your VMs; most VMs see little benefit from it. It's best to reserve the available GPU resources for the VMs with the greatest need rather than wasting them on VMs that don't require a hardware GPU.
You also need to know that GPU offloading is based on RemoteFX, which in turn has a dependency on the Remote Desktop Protocol (RDP) client. In a way, this makes perfect sense: graphically intensive workloads running on a VM are typically accessed through an RDP client, so it stands to reason that the client plays a role in the rendering process.
RemoteFX and Hyper-V GPU offloading require RDP version 7.1 or higher. Version 7.1 is included with Windows 7 SP1, and newer versions of Windows include more recent versions of RDP that fully support RemoteFX and the use of a vGPU. Windows 8.1, for instance, includes RDP version 8.1.
If you want to make a host server's GPU available for use by your VMs, you must begin by making Hyper-V aware of the GPU's existence. To do so, open Hyper-V Manager, right-click on your Hyper-V host server, and choose the Hyper-V Settings command from the shortcut menu. Windows will display the Hyper-V Settings dialog box for the selected host. Click on the Physical GPUs container, select your preferred GPU from the GPU drop-down list, and then select the Use this GPU with RemoteFX checkbox.
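If you prefer to script this step, the Hyper-V PowerShell module provides cmdlets for it. The sketch below assumes an elevated PowerShell session on the host; the `*NVIDIA*` name filter is only an example, so substitute the adapter name reported by the first command.

```powershell
# List the physical GPUs that Hyper-V can see, along with whether
# each one is currently enabled for RemoteFX
Get-VMRemoteFXPhysicalVideoAdapter

# Mark a specific GPU for RemoteFX use; the "*NVIDIA*" wildcard is a
# placeholder for the Name value reported by the previous command
Get-VMRemoteFXPhysicalVideoAdapter -Name "*NVIDIA*" |
    Enable-VMRemoteFXPhysicalVideoAdapter
```

There is a matching Disable-VMRemoteFXPhysicalVideoAdapter cmdlet if you later need to release the GPU from RemoteFX use.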
You can enable multiple GPUs, but only if all of the GPUs selected within a host are identical. This brings up another important point: if you plan to enable GPU offloading for a VM, you will need to consider how it affects your live migration and failover plans. Any host server to which the VM could be live migrated, or on which it could fail over, must be equipped with the same GPU hardware as the VM's current host.
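One way to verify that GPU parity holds across hosts is to query each one and compare the adapter names side by side. This is a sketch, and the host names used here are hypothetical placeholders for the members of your own cluster:

```powershell
# Hypothetical host names; substitute your own Hyper-V hosts
$hyperVHosts = "HyperV1", "HyperV2"

# Report each host's RemoteFX-capable GPUs; live migration and
# failover require the same GPU hardware on every candidate host
$hyperVHosts | ForEach-Object {
    Get-VMRemoteFXPhysicalVideoAdapter -ComputerName $_ |
        Select-Object ComputerName, Name, Enabled
}
```

If the Name values differ between hosts, a vGPU-equipped VM cannot safely move between them.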
Adding a vGPU to an individual VM is a relatively simple process. From within Hyper-V Manager, right-click on the VM and choose the Settings command from the shortcut menu. When the Settings dialog box appears, click Add Hardware, select the RemoteFX 3D Video Adapter, and click Add. The dialog box's list of hardware will be updated to include a listing for the new video adapter. If you select the video adapter from the hardware list, you will have the option to specify the maximum number of monitors, set the maximum display resolution, or remove the video adapter from the VM.
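The same per-VM configuration can be done from PowerShell. In this sketch, "GraphicsVM" is a placeholder VM name, and the VM must be powered off before the adapter is added:

```powershell
# Add a RemoteFX 3D video adapter to the VM
# ("GraphicsVM" is a placeholder; the VM must be powered off)
Add-VMRemoteFx3dVideoAdapter -VMName "GraphicsVM"

# Optionally adjust the monitor count and maximum resolution
Set-VMRemoteFx3dVideoAdapter -VMName "GraphicsVM" `
    -MonitorCount 2 -MaximumResolution "1920x1200"

# Remove the adapter if the VM no longer needs it
# Remove-VMRemoteFx3dVideoAdapter -VMName "GraphicsVM"
```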
As you can see, linking a physical GPU to a Hyper-V VM is a relatively straightforward process. Keep in mind, however, that Hyper-V GPU offloading should only be used for VMs running graphically intensive workloads, and take special care when planning live migrations or failover.