Tutorial: Passing through GPU to Hyper-V guest VM
I've learned so much about Hyper-V from all of the tutorials by @Kari and @Brink that I hope I can return the favor here. I created an account just to post this.
My original problem: in the guest VM, my audio was out of sync when watching videos on YouTube and Twitter. Initially I thought it was an audio problem with the VM. Then it dawned on me that it's just as likely the video was out of sync, rather than the audio. Maybe my VM didn't have video hardware acceleration or something...
Initial searches showed that Hyper-V used to offer this as an option in the GUI: a "RemoteFX Video Adapter" that you would add under the VM settings. However, Microsoft removed this feature due to a security issue.
Next, I found something called "Discrete Device Assignment" (DDA). This looked promising, until I found out that this feature is only available on Windows Server, while I am on a desktop OS with Windows 10 Professional 21H2. Further, DDA assigns the device exclusively to one VM, rather than sharing it with the host or with other VMs.
GPU Partitioning
Finally, I stumbled upon something called "GPU Partitioning". That is what this post is about. It is a method to partition off resources from your graphics card so that they can be used inside your VMs. Unfortunately, there is almost no documentation for this from Microsoft, so I've compiled this information from all my searches.
You can enable this feature with some PowerShell commands. But the difficult part is getting the video drivers working inside the guest VM. You DO NOT install any video drivers in the VM. Instead, you must copy the existing drivers from the host machine into the same location in the VM. If you get a "Code 43" error in Device Manager inside the VM, it's likely a driver issue. Luckily, some people have scripted this to save us the hassle.
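To make the "copy the drivers" part concrete, here is a hypothetical sketch of what the scripts automate. The V: drive letter and the NVIDIA folder names are assumptions for illustration only; the actual FileRepository folder name depends on your GPU vendor and driver version.

```powershell
# Hypothetical sketch of the manual driver copy that the scripts automate.
# ASSUMPTIONS: the VM's VHDX is mounted on the host as V:, and an NVIDIA GPU;
# the FileRepository folder name will differ on your system.
$vmDrive = "V:"

# Inside the guest, the package goes under HostDriverStore (not DriverStore)
Copy-Item -Recurse `
    "C:\Windows\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_*" `
    "$vmDrive\Windows\System32\HostDriverStore\FileRepository\"

# NVIDIA also keeps loose nv* files directly in System32
Copy-Item "C:\Windows\System32\nv*" "$vmDrive\Windows\System32\"
```

The scripts in Step 3 below do this matching and copying for you, which is why you should prefer them over hand-copying.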
Requires an existing Generation 2 VM.
Steps:
- Test whether your GPU can be partitioned at all. On the host, open a PowerShell prompt as administrator. Then run:
Get-VMPartitionableGpu
(Windows 10)
or
Get-VMHostPartitionableGpu
(Windows 11)
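If you are unsure how to read the result, this minimal sketch (Windows 10 cmdlet shown; swap in the Windows 11 one as needed) just checks whether anything came back — a partitionable GPU returns an object with a device path and partition counts, while empty output means the GPU or its driver does not support partitioning:

```powershell
# Run in an elevated PowerShell prompt on the host.
if (Get-VMPartitionableGpu -ErrorAction SilentlyContinue) {
    Write-Host "A partitionable GPU was found."
} else {
    Write-Host "No partitionable GPU found - check your GPU driver."
}
```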
- Open up Device Manager on the guest VM, and check the Display Adapters. You will see that your GPU is NOT enabled.
Then shut down the VM.
- Go to this link: GitHub - jamesstringerparsec/Easy-GPU-PV: A Project dedicated to making GPU Partitioning on Windows easier!
You can download the full repo if you want. But you only need these two files:
- Add-VMGpuPartitionAdapterFiles.psm1
- Update-VMGpuPartitionDriver.ps1
- From the admin powershell console, run this command:
.\Update-VMGpuPartitionDriver.ps1 -VMName "Name of your VM" -GPUName "AUTO"
Just edit that command with the name of your VM. GPUName "AUTO" will automatically detect your GPU. The script finds all the driver files on your host machine and copies them into the VM. This can take some time.
- With the VM still off, create a new .ps1 powershell file on the host, and paste in this code:
This script will enable GPU partitioning for your VM and turn on some required settings.
Code:
$vm = "Name of your VM"
if (Get-VMGpuPartitionAdapter -VMName $vm -ErrorAction SilentlyContinue) {
    Remove-VMGpuPartitionAdapter -VMName $vm
}
Set-VM -GuestControlledCacheTypes $true -VMName $vm
Set-VM -LowMemoryMappedIoSpace 1Gb -VMName $vm
Set-VM -HighMemoryMappedIoSpace 32Gb -VMName $vm
Add-VMGpuPartitionAdapter -VMName $vm
- Edit the first line and again put in the name of your VM. Then run this script file in your PowerShell prompt by preceding the filename with
.\
just like you did with the previous script above.
- Now we should have the drivers copied into the VM and the GPU partitioning feature enabled. Turn on the VM, go back to Device Manager, and check whether your GPU is now shown under Display Adapters.
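If you also want to verify from the host side, this one-liner (a quick sketch, not part of the original steps; replace the placeholder VM name) should list the partition adapter you just attached:

```powershell
# Run on the host: shows the GPU partition adapter attached to the VM.
Get-VMGpuPartitionAdapter -VMName "Name of your VM"
```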
My solution above is to enable your GPU on an already existing VM. If you are willing to create a new VM from scratch, you could use all the files from the github repository in Step 3. However it also installs some extra software that you might not need.
As far as I understand, every time you update the graphics drivers on your host, you will need to copy the new drivers into the guest VMs as well. Simply repeat Step 4 to do this.
In the resource links below, you will see other partitioning settings that you can play with, but they were unnecessary for me. I also read that this GPU passthrough feature would require turning off both dynamic memory and checkpoints. That was not required for me: I left both enabled and got no errors. If you do get errors, try turning those off.
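In case you do hit those errors, here is a minimal sketch of the two "turn off" commands (the VM name is a placeholder; run on the host with the VM shut down):

```powershell
$vm = "Name of your VM"
Set-VM -VMName $vm -StaticMemory             # turn off dynamic memory
Set-VM -VMName $vm -CheckpointType Disabled  # turn off checkpoints
```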
And to bring it full circle, after doing this, my GPU is correctly shown in Device Manager, and my audio/video lag on youtube inside the VM is gone.
Resources:
GPU partitioning is finally possible in Hyper-V : sysadmin
I made a Powershell script to automate the creation of GPU-P enabled Hyper V VMs - Servers and NAS - Linus Tech Tips
GPU Virtualization with Hyper-V – James' Personal Site
Running FiveM in a Hyper-V VM with full GPU performance for testing ("GPU Partitioning") - Cookbook - Cfx.re Community
https://forum.level1techs.com/t/2-ga...-hyperv/172234