I created an account just to give you kudos. This worked perfectly and on the first try, nice work.
Solution for the "New-Item : The given path's format is not supported" error, and all the other errors following it:
1) Launch diskmgmt.msc on the host and mount the virtual machine's .VHDX disk,
2) Assign any available drive letter to the partition on the mounted disk that contains the operating system files (i.e. the \Windows directory),
3) Unmount the virtual disk,
4) Launch the "Update-VMGpuPartitionDriver.ps1" script again.
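If you'd rather do the same steps from PowerShell instead of diskmgmt.msc, something along these lines should also work (the VHDX path and the drive letter are examples, adjust them for your setup):

# Example only - point this at your VM's virtual disk
$VhdPath = "D:\Hyper-V\MyVM\MyVM.vhdx"

# Mount the disk and give the OS partition (the one containing \Windows, usually the largest) a free letter, e.g. T:
$Disk = Mount-VHD -Path $VhdPath -Passthru | Get-Disk
$Disk | Get-Partition | Sort-Object Size -Descending | Select-Object -First 1 | Set-Partition -NewDriveLetter T

# Detach the disk again before running the script
Dismount-VHD -Path $VhdPath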
Thanks.
Tried it; it does indeed solve the drive issue.
- - - Updated - - -
UPDATE:
Note on copying the drivers from the host via the script:
(I tried this on Win11 host and with a Win11 VM previously, with success.)
Now I re-tried on Win11 host and with a Win10 VM:
-> the driver still needs to be copied from the host, otherwise the device will not work (it shows in Device Manager with a yellow triangle)
-> the copy process works BUT the script still throws errors: access denied when copying some syswow64 DLLs (vulkan...) to the VM's mounted drive; somehow, even with admin rights, the mounted disk has protected files in its system directories. I'm missing something here, but I didn't look into it further since some files were copied.
=> however, since this VM was originally installed on the host and then restored into the VM from a Macrium Reflect image, the GPU was already known, so GPU partitioning actually works despite the 'vulkan' copy errors, meaning the core driver files were copied and the other files were already present from before.
In this case the Win10 VM is a clone of the host before it was upgraded to Win11.
Here's a preview.
The VM, on the right in the screenshot, is outdated at the moment, but the procedure worked:
I can conclude that for GPU partitioning to work, one needs the exact same driver on host and guest.
From Win11 host to Win10 VM it worked in this case because the Intel Iris Graphics 655 (my GPU) has the same driver for both Win10 and 11.
I tried this in a Win8.1 VM and indeed the device was detected in device manager but there are no drivers available in Win8.1 for my GPU.
Important:
[edit] The main requirement is that this needs a generation 2 VM; in other words, it requires Win8.1 or higher installed in the VM.
This didn't work in a live Linux distro: only the generic Hyper-V gfx device is seen.
And this didn't work in a Windows PE environment either.
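(If you want to quickly check whether an existing VM is generation 2, this should do it from an admin PowerShell prompt on the host; the VM name is just a placeholder:)

Get-VM -Name "Name of your VM" | Select-Object Name, Generation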
- - - Updated - - -
It seems not to work with Linux guest.
This only works with Win8.1 or higher as guest, and with the SAME driver version and files on host and guest.
The partitioned GPU gets a different hardware ID, which is logical since it isn't the entire GPU but only 'a part' of it, so it's a different device, but one with access to the host's graphics capabilities.
Installing the driver manually in the VM, through Windows Update or the manufacturer's driver installer, will fail, so copying the driver from host to guest seems to be the only thing that works.
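For reference, the driver copy boils down to roughly this (it's what the script automates; I'm sketching it, so treat the folder name as an example only and check which FileRepository folder your GPU driver actually lives in on the host; this assumes the guest's OS partition is mounted as T: as described earlier):

# Example only - the folder name depends on your GPU driver (look under C:\Windows\System32\DriverStore\FileRepository on the host)
$HostFolder = "C:\Windows\System32\DriverStore\FileRepository\iigd_dch.inf_amd64_0123456789abcdef"
$GuestDrive = "T:"   # the guest's mounted OS partition

robocopy $HostFolder "$GuestDrive\Windows\System32\HostDriverStore\FileRepository\$(Split-Path $HostFolder -Leaf)" /E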
I created an account just to say thank you. I was able to get this working and my virtual machine now shows my Radeon GPU installed in the host.
While following the tutorial, a few things tripped me up along the way:
- First I needed to set Powershell to be able to execute unsigned scripts
- When right-clicking on the 2 files from Github and saving them, the saved files contain content that is *not* the script, and executing them produced a bunch of errors. What I needed to do was open the files on Github and copy/paste the content into text files manually.
- Upon starting the Virtual Machine it gave an error about not being able to create a checkpoint (definitely was not happening prior, I start this machine daily). To get around this I disabled checkpoints for the VM.
Here are the steps I took, based on the tutorial:
# Pass Through the GPU from the Host to the Virtual Machines
# First you may need to change Powershell to allow scripts to execute
1. Open Powershell prompt as an administrator
2. Enter this command:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
3. Enter this command:
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
1. Test to see if your GPU can be partitioned at all. On the Host, open a Powershell prompt as administrator. Then run:
Get-VMPartitionableGpu (win10)
or
Get-VMHostPartitionableGpu (win11)
2. Open up Device Manager on the guest VM, and check the Display Adapters. You will see that your GPU is NOT enabled.
Then shut down the VM.
VM_BOB shows: Microsoft Hyper-V Video
Microsoft Remote Display Adapter
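(If you prefer a command-line check inside the guest instead of Device Manager, something like this should list the display adapters and their status:)

Get-PnpDevice -Class Display | Select-Object FriendlyName, Status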
3. Go here: GitHub - jamesstringerparsec/Easy-GPU-PV: A Project dedicated to making GPU Partitioning on Windows easier!
And download these two files:
Add-VMGpuPartitionAdapterFiles.psm1
Update-VMGpuPartitionDriver.ps1
Note: Right-clicking and downloading these files may not work (you'll get script errors).
Open up the files on Github directly and copy/paste the contents into a text file with the names above.
4. From the admin powershell console, change to the directory where you saved the 2 files above and run this command:
(First, replace "Name of your VM" with the name of your virtual machine.)
.\Update-VMGpuPartitionDriver.ps1 -VMName "Name of your VM" -GPUName "AUTO"
example:
.\Update-VMGpuPartitionDriver.ps1 -VMName "VM_BOB" -GPUName "AUTO"
5. - With the VM still off, create a new .ps1 powershell file on the host named runme.ps1
- Copy and paste the code below and save it as runme.ps1
(Save it to the same directory as your .psm1 files from step 3)
- Edit the first line and once again put the name of your VM. (VM_BOB in my example)
$vm = "VM_BOB"
if (Get-VMGpuPartitionAdapter -VMName $vm -ErrorAction SilentlyContinue) {
Remove-VMGpuPartitionAdapter -VMName $vm
}
Set-VM -GuestControlledCacheTypes $true -VMName $vm
Set-VM -LowMemoryMappedIoSpace 1Gb -VMName $vm
Set-VM -HighMemoryMappedIoSpace 32Gb -VMName $vm
Add-VMGpuPartitionAdapter -VMName $vm
- Now execute runme.ps1 in Powershell admin with the following command:
.\runme.ps1
6. Now we should have the drivers copied into the VM, and the GPU partitioning feature enabled. You can now turn on the VM, and go back to Device Manager and see if your GPU is now shown under Display Adapters
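(You can also confirm from the host that a GPU partition adapter is now attached to the VM, for example:)

Get-VMGpuPartitionAdapter -VMName "VM_BOB"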
NOTE: When you go to turn your virtual machine back on, you may encounter an error about not being able to create a checkpoint.
The only solution I've found is to turn off checkpoints for the VM...
Hyper-V Manager
Click on your virtual machine -> Settings ->
Click Checkpoints in the Management section.
Change the type of checkpoint by selecting the appropriate option (change Production checkpoints to Standard checkpoints).
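(If you'd rather do this from PowerShell than the GUI, the same setting can be changed with Set-VM; use Standard, or Disabled to turn checkpoints off entirely:)

Set-VM -VMName "VM_BOB" -CheckpointType Standard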
I also just created an account to thank the both of you. I think I had issues with my first attempt because I downloaded the repo. However I followed @mapsnrop for my second attempt about a week later and now my 3070 is showing in my VM.
Checkpoints are disabled. My only potential suggestion is using exports as a VM backup method, in case any noobs like myself stumble across this thread.
Thanks again guys.
I also made an account explicitly to thank you for this post, as it more closely aligns with my use case of using an existing VM (recently converted to VHDX from VMDK) as a software developer, and needing to leverage my laptop’s discrete GPU for rendering/Unreal etc. A few things I came across that may help others:
1) Initially I received some script errors. Upon investigating, I realized that the script author assumes there will be only a single disk on the target VM. In my case, I have a disk for the OS and another for Projects, so his script was trying to concatenate them together. To fix this, I modified (forgive me, I don't have the code open at the moment) the 'where' expression pipe to use a filter, for instance "where $drive_var -eq "E"" (pipe to remaining code). This solved the problem (there's a rough sketch at the end of this post).
2) it’s non negotiable for me that I must use enhanced sessions, as I pass a lot of devices over vm connect. Thus far, everything seems to still work fine. However, early on, I found that program like Blender would have black screen distortions when resizing windows in the guest. To solve this, I also changed settings in the host Nvidia panel to “prefer discrete GPU with the vmconnect.exe process.” This also seemed to resolve the issue, and Blender correctly picks my NVIDIA card as the render device in-program as well.
3) At this time, I see 3 display adapters in the guest: Hyper-V Video, the remote one, and my RTX 3060. It claims the RTX is used for the 2 render attributes only, while the others show accelerated 3D. However, various tests with OpenGL extensions, chrome://gpu, and compiling C++ programs with graphics show that the NVIDIA card's attributes/Vulkan are indeed correctly detected. Additionally, playing a 4K YouTube video in the guest shows NVIDIA decode load on the host, as expected. So I'm not sure why that is. As I do more full-fledged programming with heavy geometry, I will report any additional gotchas.
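For anyone hitting the same multi-disk problem as in point 1) above, here's a rough sketch of the kind of filter I mean. The variable names are mine, not the actual script's; the idea is to keep only the volume that actually holds the guest's \Windows directory (or hard-code the letter you know is right) instead of concatenating every letter the mounted VHDX exposes:

# Illustrative only - the real Easy-GPU-PV script names things differently
$VhdPath = "path\to\your\guest.vhdx"
$DriveLetters = Mount-VHD -Path $VhdPath -Passthru | Get-Disk | Get-Partition | Get-Volume |
    Where-Object { $_.DriveLetter } | ForEach-Object { $_.DriveLetter }

# Keep only the volume containing \Windows (or replace the test with: $_ -eq "E")
$OsDrive = $DriveLetters | Where-Object { Test-Path "$($_):\Windows" } | Select-Object -First 1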