Enable Discrete Device Assignment on Windows Server 2016 TP5

June 4, 2016

Article by Benny Tritsch and Kris Griffin

In Windows Server 2016, GPUs for Remote Desktop Session Host will only be available through Discrete Device Assignment (DDA), but not RemoteFX. The good thing about DDA is that it can also be used for Windows 10 VMs. While RemoteFX is based on GPU sharing and API intercept, DDA is an implementation of GPU pass-through. This means that the physical GPU is passed through directly and exclusively to the RDSH or the Windows 10 VM. Instead of generic virtual device drivers it is possible to use the GPU manufacturer’s graphics driver in the VM. Unfortunately, there is no easy way to configure DDA in Server Manager or Hyper-V Manager. But here is a step-by-step guide on how to do this.

Before you can start using DDA, you need to be sure that you have the right host hardware and the right GPU. There is a great PowerShell script on GitHub, developed by Ben Armstrong, that checks DDA capabilities. Here is the download path: Please be aware that only modern servers with the latest chipset, BIOS and firmware versions are DDA-compatible. The same is true for GPUs: not every graphics card can be used for DDA. In essence, hardware compatibility is still a limiting factor for this technology.

Once you have decided to configure DDA on a host system, you'll need to create a Hyper-V virtual machine (generation 2) and install Windows 10 Version 1511 or Windows Server 2016 TP5 as the guest operating system. To enable Discrete Device Assignment and make a GPU available exclusively to the virtual machine, you need to open Windows PowerShell as an administrator on the host system. The following steps configure DDA for the VM, which is named "GpuVM" in our example.

  • Use the Get-PnpDevice command with a search condition to narrow down the device class you want to search for: $pnpdevs = Get-PnpDevice | Where-Object {$_.Present -eq $true} | Where-Object {$_.Class -eq "Display"}. Take a look at the result by simply typing $pnpdevs in the command window. Find the GPU you want to assign to the VM. Please note that the array is zero-based, so the first entry is at index 0.
  • Disable the GPU graphics device on the host system, using the Disable-PnpDevice command in the PowerShell command window: Disable-PnpDevice -InstanceId $pnpdevs[1].InstanceId -Confirm:$false. When you run $pnpdevs = Get-PnpDevice | Where-Object {$_.Present -eq $true} | Where-Object {$_.Class -eq "Display"} followed by $pnpdevs again, the output should now show an error for the device, as it is disabled.
  • Check Device Manager on the host system again to confirm that the GPU is disabled.
  • Dismount the device from the host system by first obtaining the PCI location of the physical device. The Get-PnpDeviceProperty command retrieves the PCI location path of the pass-through device. $locationpath = ($pnpdevs[1] | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).data[0]. Check the PCI location by typing $locationpath.
  • The Dismount-VMHostAssignableDevice command will then dismount the physical device so that it is no longer accessible to the parent partition: Dismount-VMHostAssignableDevice -LocationPath $locationpath -Force.
  • After the dismount command has executed, the GPU graphics device is no longer listed under the Display device class. When you open Device Manager on the host system again, the GPU is no longer shown under Display adapters; instead, it is listed under System devices as PCI Express Graphics Processing Unit – Dismounted. Even though the device is dismounted on the host, it is still enabled, so its I/O resources remain allocated to the physical device on the host system.
  • Change the automatic stop action of the VM so that it is turned off. In the properties of the VM, go to Automatic Stop Action and select "Turn off virtual machine".
  • Issue the Add-VMAssignableDevice command on the host system to enable Discrete Device Assignment. The variable $locationpath comes from the previous commands. In this example “GpuVM” is the name of the virtual machine. Add-VMAssignableDevice -locationpath $locationpath -VMname GpuVM.
  • If everything went well, the GPU is now available and accessible exclusively to the VM. Open Device Manager in the VM; the new device is now listed under Display adapters.
  • Install the device driver for the GPU using the same driver as used on the host. The GPU will then be properly recognized by Device Manager in the VM.
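Taken together, the assignment steps above can be sketched as a single PowerShell script. This is a sketch, not a definitive implementation: it assumes the example values from this article, namely a VM called "GpuVM" and the GPU sitting at index 1 of the Display class; both will differ on your system, and the script must run in an elevated PowerShell on the Hyper-V host.

```powershell
# Run in an elevated PowerShell on the Hyper-V host.
# Assumptions from the article's example: the VM is named "GpuVM" and the
# target GPU is the second entry ($pnpdevs[1]) in the Display class.

# 1. List the present display devices and pick the GPU to pass through.
$pnpdevs = Get-PnpDevice | Where-Object {$_.Present -eq $true} |
           Where-Object {$_.Class -eq "Display"}
$gpu = $pnpdevs[1]

# 2. Disable the device on the host.
Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false

# 3. Get the PCI location path and dismount the device from the host.
$locationpath = ($gpu | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).Data[0]
Dismount-VMHostAssignableDevice -LocationPath $locationpath -Force

# 4. The VM must be turned off (not saved) when the host stops.
Set-VM -Name GpuVM -AutomaticStopAction TurnOff

# 5. Assign the dismounted device to the VM.
Add-VMAssignableDevice -LocationPath $locationpath -VMName GpuVM
```

Note that Set-VM -AutomaticStopAction TurnOff is the scripted equivalent of the Automatic Stop Action setting described above.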

In case the operating system installed in the VM is Windows Server 2012 R2 or Windows Server 2016, an additional step is required. Open Gpedit.msc and go to Computer Configuration – Administrative Templates – Windows Components – Remote Desktop Services – Remote Desktop Session Host – Remote Session Environment. Set the policy "Use the hardware default graphics adapter for all Remote Desktop Services sessions" to Enabled.
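If you prefer to script this step rather than use the Group Policy editor, the policy can also be set through its registry backing. The value name bEnumerateHWBeforeSW used here is our assumption about how this policy is stored; verify it against your Windows build before relying on it.

```powershell
# Scripted equivalent of enabling "Use the hardware default graphics
# adapter for all Remote Desktop Services sessions" in the VM.
# Assumption: the policy is backed by the bEnumerateHWBeforeSW DWORD value;
# confirm this on your build before using it in production.
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name "bEnumerateHWBeforeSW" `
    -PropertyType DWord -Value 1 -Force | Out-Null
```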

You’re all set now and can run graphics applications accelerated by the physical GPU in the virtual machine. This is not only beneficial for CAD/CAM applications, but also for most browsers, Microsoft Office and Adobe Acrobat.

In case you want to restore the GPU device to the host system, here are the steps to do so. First, shut down the VM guest OS that is currently using the GPU graphics adapter. Then open PowerShell as Administrator on the host system.

  • Find the device's InstanceId on the host system using the Get-PnpDevice command. The device is dismounted on the host and is now categorized under the System class, so filter on that class: $ppsrch = Get-PnpDevice | Where-Object {$_.Present -eq $true} | Where-Object {$_.Class -eq "System"}. To see the result, type $ppsrch.
  • Look for the dismounted PCI Express GPU. The variable $ppsrch is an array of devices, and the index is zero-based (that is, the first item is counted as 0). You will have to determine the index of the dismounted device in the System class manually by counting the entries on the screen. In our example the dismounted PCI Express GPU is the eleventh entry, i.e. at index 10.
  • Use the Get-PnpDeviceProperty command to obtain the location path of the device: $locationpath = ($ppsrch[10] | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).data[0]. Check the location by typing $locationpath.
  • Use the Remove-VMAssignableDevice command to remove the GPU from the VM, based on the location path we just assigned to the variable $locationpath: Remove-VMAssignableDevice -LocationPath $locationpath -VMName GpuVM.
  • You can now confirm that the GPU has been removed from the VM by checking the Device Manager in the VM.
  • On the host, mount the device again, using the Mount-VmHostAssignableDevice command. Mount-VmHostAssignableDevice -locationpath $locationpath.
  • Use the Get-PnpDevice command to search for the GPU device: $pnpdevs = Get-PnpDevice | Where-Object {$_.Present -eq $true} | Where-Object {$_.Class -eq "Display"}. Take a look at the result by typing $pnpdevs in the command window and make sure the GPU device appears in the list.
  • The mount can also be verified in Device Manager on the host: the device is visible again in the Display adapters section, even though it is still disabled.
  • Enable the GPU device using the Enable-PnpDevice command. As with the Disable-PnpDevice command, the index into the $pnpdevs array is 1 if the GPU is the second entry in the (zero-based) list: Enable-PnpDevice -InstanceId $pnpdevs[1].InstanceId -Confirm:$false.
  • The device is now restored as mounted and enabled on the host system, as shown by the entry in Device Manager. The device driver is automatically loaded.
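The restore procedure can likewise be sketched as one script. As before, this is a sketch under the article's example assumptions: the VM is named "GpuVM", the dismounted GPU is entry 10 in the System class and entry 1 in the Display class; adjust the indexes to match your own device lists, and run it elevated on the host after shutting down the VM.

```powershell
# Run in an elevated PowerShell on the Hyper-V host after the VM is shut down.
# Assumptions from the article's example: VM name "GpuVM"; the dismounted GPU
# is at index 10 in the System class and index 1 in the Display class.

# 1. Locate the dismounted GPU in the System device class.
$ppsrch = Get-PnpDevice | Where-Object {$_.Present -eq $true} |
          Where-Object {$_.Class -eq "System"}
$locationpath = ($ppsrch[10] | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).Data[0]

# 2. Remove the device from the VM and remount it on the host.
Remove-VMAssignableDevice -LocationPath $locationpath -VMName GpuVM
Mount-VMHostAssignableDevice -LocationPath $locationpath

# 3. Re-enable the device on the host; the driver loads automatically.
$pnpdevs = Get-PnpDevice | Where-Object {$_.Present -eq $true} |
           Where-Object {$_.Class -eq "Display"}
Enable-PnpDevice -InstanceId $pnpdevs[1].InstanceId -Confirm:$false
```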