Tutorial: Passing through GPU to Hyper-V guest VM

  1. Posts : 7
    Win10 Pro

    Tutorial: Passing through GPU to Hyper-V guest VM

I've learned so much about Hyper-V from all of the tutorials by @Kari and @Brink that I hope I can return the favor here. I've created an account just to post this.

My original problem: In the guest VM, my audio was out of sync when watching videos on YouTube and Twitter. Initially I thought it was an audio problem with the VM. Then it dawned on me that it's just as likely the video is out of sync, rather than the audio. Maybe my VM didn't have video hardware acceleration or something...

Initial searches showed that Hyper-V actually used to have this as an option within the GUI: a "RemoteFX Video Adapter" that you would add under the VM settings. However, due to a security vulnerability, this feature was removed by Microsoft.

Next, I found something called "Discrete Device Assignment" (DDA). This looked promising, until I found out that this feature is only available on Windows Server, while I am on a desktop OS with Win10 Professional 21H2. Further, DDA assigns the device exclusively to a single VM, rather than sharing it with the host or with other VMs.

    GPU Partitioning

Finally, I stumbled upon something called "GPU Partitioning". That is what this post is about. It is a method to partition off resources from your graphics card so that they can be used inside your VMs. Unfortunately, there is almost no documentation for this from Microsoft, but I've compiled this information from all my searches.

You can enable this feature with a few PowerShell commands. But the difficult part is getting the video drivers working inside the guest VM. You DO NOT install any video drivers in the VM. Instead, you must copy the existing drivers from the host machine into the corresponding location in the VM. If you get a "Code 43" error in Device Manager inside the VM, it's likely a driver issue. Luckily, some people have scripted this to save us the hassle.
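To make the driver requirement concrete, here is roughly what the scripts below automate. The `nv_dispi.inf_amd64_<hash>` folder name is purely illustrative; the actual folder depends on your GPU vendor and driver version, and the scripts locate it for you:

```
# Host side: GPU driver packages live in the driver store, e.g.
#   C:\Windows\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_<hash>\
# Guest side: the same folder must exist under "HostDriverStore" instead:
#   C:\Windows\System32\HostDriverStore\FileRepository\nv_dispi.inf_amd64_<hash>\
# Conceptually (with the VM's system disk mounted on the host as V:):
Copy-Item -Recurse `
    "C:\Windows\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_<hash>" `
    "V:\Windows\System32\HostDriverStore\FileRepository\"
```

You don't need to do this by hand; the script in Step 4 handles the mounting, folder matching, and copying.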

    Requires an existing Generation 2 VM.


    1. Test to see if your GPU can be partitioned at all. On the Host, open a Powershell prompt as administrator. Then run:
  Get-VMPartitionableGpu (Win10)
  Get-VMHostPartitionableGpu (Win11)

    2. Open up Device Manager on the guest VM, and check the Display Adapters. You will see that your GPU is NOT enabled.
      Then shut down the VM.

    3. Go to this link: GitHub - jamesstringerparsec/Easy-GPU-PV: A Project dedicated to making GPU Partitioning on Windows easier!
      You can download the full repo if you want. But you only need these two files:
      • Add-VMGpuPartitionAdapterFiles.psm1
      • Update-VMGpuPartitionDriver.ps1

    4. From the admin powershell console, run this command:
      .\Update-VMGpuPartitionDriver.ps1 -VMName "Name of your VM" -GPUName "AUTO"
  Just edit that command with the name of your VM. -GPUName "AUTO" will automatically detect your GPU. The script finds all the driver files on your host machine and copies them to the VM. This can take some time.

    5. With the VM still off, create a new .ps1 powershell file on the host, and paste in this code:
  $vm = "Name of your VM"
  if (Get-VMGpuPartitionAdapter -VMName $vm -ErrorAction SilentlyContinue) {
     Remove-VMGpuPartitionAdapter -VMName $vm
  }
  Set-VM -GuestControlledCacheTypes $true -VMName $vm
  Set-VM -LowMemoryMappedIoSpace 1GB -VMName $vm
  Set-VM -HighMemoryMappedIoSpace 32GB -VMName $vm
  Add-VMGpuPartitionAdapter -VMName $vm
  This script enables GPU partitioning for your VM and turns on some required settings. (The if-block removes any existing partition adapter first, so the script can safely be re-run.)

  6. Edit the first line and again put in the name of your VM. Then run this script file in your PowerShell prompt by preceding the filename with .\ just like you did with the previous script above.

  7. Now we should have the drivers copied into the VM and the GPU partitioning feature enabled. You can turn on the VM, go back to Device Manager, and see if your GPU is now shown under Display Adapters.
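If you prefer to check from a PowerShell prompt inside the guest instead of Device Manager, something like this should show the partitioned adapter once it's working:

```
# Run inside the guest VM: list display adapters with driver status.
Get-CimInstance Win32_VideoController |
    Select-Object Name, Status, DriverVersion
```

A Status of "OK" on your GPU's entry means the driver loaded; a "Code 43" problem would show up as an error status instead.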

My solution above enables your GPU on an already existing VM. If you are willing to create a new VM from scratch, you could use all the files from the GitHub repository in Step 3; however, that route also installs some extra software that you might not need.

As far as I understand, every time you update the graphics drivers on your host, you will similarly need to copy those new drivers into the guest VMs as well. Simply repeat Step 4 to do this.

In the resource links below, you will see that there are other partitioning settings you can play with, but they were unnecessary for me. I also read that this GPU passthrough feature would require turning off both dynamic memory and checkpoints, but that was not the case for me: I left both enabled and got no errors. If you get errors, try turning those off.
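For the curious, the extra settings those links refer to are per-adapter partition limits set with Set-VMGpuPartitionAdapter. A hedged example follows; the parameters are real cmdlet parameters, but the values are illustrative, not recommendations (many guides simply divide the maximum reported by Get-VMPartitionableGpu by the number of partitions they want):

```
$vm = "Name of your VM"
# Optional fine-tuning of how much GPU VRAM the VM's partition may use.
# Run with the VM off; sensible values depend on your specific GPU.
Set-VMGpuPartitionAdapter -VMName $vm `
    -MinPartitionVRAM 80000000 `
    -MaxPartitionVRAM 100000000 `
    -OptimalPartitionVRAM 100000000
```

There are matching Min/Max/Optimal parameters for the encode, decode, and compute resources as well, but as noted above, the defaults worked fine for me.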

And to bring it full circle: after doing this, my GPU is correctly shown in Device Manager, and my audio/video lag on YouTube inside the VM is gone.

    GPU partitioning is finally possible in Hyper-V : sysadmin
    I made a Powershell script to automate the creation of GPU-P enabled Hyper V VMs - Servers and NAS - Linus Tech Tips
    GPU Virtualization with Hyper-V – James' Personal Site
    Running FiveM in a Hyper-V VM with full GPU performance for testing ("GPU Partitioning") - Cookbook - Cfx.re Community

  2. Posts : 17,616
    Windows 11 Pro

    I get all kinds of script errors in step #4.

  3. Posts : 17,645
    Windows 10 Pro

    NavyLCDR said:
    I get all kinds of script errors in step #4.
    Same here.

  4. Posts : 7
    Win10 Pro
    Thread Starter

    NavyLCDR said:
    I get all kinds of script errors in step #4.
    Kari said:
    Same here.
    Can you guys post the errors you're getting? I've tested this multiple times, so maybe I can help. And what version of Windows are you on?

  5. Posts : 1,309
    Windows 11 Pro 64-bit

    Interesting experiment. I did try it in the past, manually, but with no success.

    Thanks for sharing.
    I finally got it running in a Win11 test. Just Intel Graphics host but still nice.

    I got errors as well, in the Terminal app. Weirdly enough, it worked in the classic PowerShell console with the default execution policy.

    I've only tested this on Win11 host with Win11 VM, to make it less risky driver-wise.

    Win11 Pro host (NUC8 i3) at the left, Win11 Pro VM at the right
    Tutorial: Passing through GPU to Hyper-V guest VM-gpu-vm.png

    I first got errors when mounting and then when copying, probably due to wrong permissions.

    Also important to add:
    On GitHub (see the OP's post) they use Parsec as the remote client, which provides better latency and sound for cases like this. I don't use it.
    The GPU still works in the standard (vmconnect) client, but without sound.
    However, an enhanced session didn't work in my case: while there is sound, the GPU wasn't used. It might work in the Remote Desktop client, but I didn't test it; I hear it's quite laggy since it wasn't meant for high-FPS drawing.

    It's nice to see it running as is in the standard client. But I have no need for a third party remote client on this machine.
