Tutorial: Passing through GPU to Hyper-V guest VM


  1. Posts : 1
    Win11
       #11

I created an account just to give you kudos. This worked perfectly, and on the first try; nice work.


  2. Posts : 1
    many
       #12

    NavyLCDR said:
    I get all kinds of script errors in step #4.

    Solution for the "New-Item : The given path's format is not supported" error, and all the other errors following it:

    1) Launch diskmgmt.msc on the host and mount the virtual machine's .VHDX disk,
2) Assign any available drive letter to the partition on the mounted disk that contains the operating system files (i.e. the \Windows directory),
    3) Unmount the virtual disk,
    4) Launch the "Update-VMGpuPartitionDriver.ps1" script again.
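The manual diskmgmt.msc steps above can also be scripted. This is only a rough PowerShell sketch; the .vhdx path is an assumption, so point it at your own VM's disk, and run it with the VM off:

```powershell
# Hedged sketch of the manual diskmgmt.msc fix above.
# The .vhdx path is an assumption; substitute your VM's actual disk.
$vhdx = "C:\Hyper-V\VM_BOB\VM_BOB.vhdx"

# 1) Mount the virtual disk on the host
$disk = Mount-VHD -Path $vhdx -PassThru | Get-Disk

# 2) Assign a drive letter to the largest partition
#    (usually the one holding \Windows)
$disk | Get-Partition |
    Sort-Object Size -Descending |
    Select-Object -First 1 |
    Add-PartitionAccessPath -AssignDriveLetter

# 3) Unmount the virtual disk
Dismount-VHD -Path $vhdx
```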


  3. Posts : 1,325
    Windows 11 Pro 64-bit
       #13

    McWz said:
    Solution for the "New-Item : The given path's format is not supported" error, and all the other errors following it:

    1) Launch diskmgmt.msc on the host and mount the virtual machine's .VHDX disk,
2) Assign any available drive letter to the partition on the mounted disk that contains the operating system files (i.e. the \Windows directory),
    3) Unmount the virtual disk,
    4) Launch the "Update-VMGpuPartitionDriver.ps1" script again.
    Thanks.
Tried it; it does indeed solve the drive issue.

    - - - Updated - - -
    UPDATE:
    Note on the copying of the drivers from host via script:

    (I tried this on Win11 host and with a Win11 VM previously, with success.)

    Now I re-tried on Win11 host and with a Win10 VM:
-> the driver still needs to be copied from the host, otherwise the device will not work (it shows in Device Manager with a yellow triangle)
-> the copy process works, BUT the script still reports errors: access denied when copying some SysWOW64 DLLs (vulkan...) to the VM's mounted drive. Somehow, even with admin rights, the mounted disk has protected files in its system directories; I'm missing something here, but since some files were copied I didn't look into it further.
=> however, since this VM was originally installed on the host and then restored into the VM from a Macrium Reflect image, the GPU is already known, so GPU partitioning works despite the 'vulkan' copy errors: the core driver files were copied and the other files were already present.

    In this case the Win10 VM is a clone of the host before it was upgraded to Win11.
    Here's a preview.
The VM, on the right in the screenshot, is outdated at the moment, but the procedure worked:
    Tutorial: Passing through GPU to Hyper-V guest VM-vm-10.png

From this I conclude that GPU partitioning requires the exact same driver in host and guest.
It worked here from a Win11 host to a Win10 VM because the Intel Iris Graphics 655 (my GPU) uses the same driver on both Win10 and Win11.
I also tried this in a Win8.1 VM: the device was detected in Device Manager, but no Win8.1 drivers are available for my GPU.

    Important:
[edit] the main requirement is: this needs a generation 2 VM; in other words, it requires Win8.1 or higher installed in the VM.
It didn't work in a live Linux distro: only the generic Hyper-V graphics device is seen.
And it didn't work in a Windows PE environment either.
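A quick way to confirm the generation 2 requirement from the host; "VM_BOB" is just an example name (it's the one used later in this thread), so use your own VM's name:

```powershell
# Hedged sketch: confirm the VM is generation 2 before attempting GPU partitioning.
# "VM_BOB" is an example name; substitute your own VM's name.
Get-VM -Name "VM_BOB" | Select-Object Name, Generation
```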

    - - - Updated - - -

    D3vil0p3r said:
    I have only one doubt... If I have a Linux guest, this method should not work... right? It works only if I have Windows host and my guest is a Windows OS?
It does not seem to work with a Linux guest.
This only works with Win8.1 or higher as the guest, and with the SAME driver version and files on host and guest.

The partitioned GPU gets a different hardware ID, which is logical: it isn't the entire GPU, only 'a part' of it, so it's a different device, but one with access to the host's graphics capabilities.
Installing the driver manually in the VM, whether through Windows Update or the manufacturer's installer, will fail; only copying the driver from host to guest seems to work.
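Because the guest sees a per-VM adapter rather than the host GPU itself, you can list what has been assigned to a given VM from the host; a sketch, with "VM_BOB" as an assumed example name:

```powershell
# Hedged sketch: list the GPU partition adapter(s) assigned to a VM, from the host.
Get-VMGpuPartitionAdapter -VMName "VM_BOB"
```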


  4. Posts : 1
    Windows 10 Pro
       #14

    I created an account just to say thank you. I was able to get this working and my virtual machine now shows my Radeon GPU installed in the host.

    Using the tutorial, a few things tripped me up along the way:

    - First I needed to set Powershell to be able to execute unsigned scripts

- When right-clicking the 2 files on GitHub and saving them, the saved files contain content that is *not* the script.
Upon executing the script I was shown a bunch of errors. What I needed to do was open the files on GitHub and copy/paste their contents into text files manually.

    - Upon starting the Virtual Machine it gave an error about not being able to create a checkpoint (definitely was not happening prior, I start this machine daily). To get around this I disabled checkpoints for the VM.

    Here are the steps I took, based on the tutorial:

    # Pass Through the GPU from the Host to the Virtual Machines

    # First you may need to change Powershell to allow scripts to execute

    1. Open Powershell prompt as an administrator

    2. Enter this command:

    Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

    3. Enter this command:

    Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass




    1. Test to see if your GPU can be partitioned at all. On the Host, open a Powershell prompt as administrator. Then run:

    Get-VMPartitionableGpu (win10)
    or
    Get-VMHostPartitionableGpu (win11)


    2. Open up Device Manager on the guest VM, and check the Display Adapters. You will see that your GPU is NOT enabled.
    Then shut down the VM.


    VM_BOB shows: Microsoft Hyper-V Video
    Microsoft Remote Display Adapter


    3. Go here: GitHub - jamesstringerparsec/Easy-GPU-PV: A Project dedicated to making GPU Partitioning on Windows easier!

    And download these two files:

    Add-VMGpuPartitionAdapterFiles.psm1
    Update-VMGpuPartitionDriver.ps1

    Note: Right-clicking and downloading these files may not work (you'll get script errors).

    Open up the files on Github directly and copy/paste the contents into a text file with the names above.


    4. From the admin powershell console, change to the directory where you saved the 2 files above and run this command:

(First, replace "Name of your VM" with the name of your virtual machine...)

    .\Update-VMGpuPartitionDriver.ps1 -VMName "Name of your VM" -GPUName "AUTO"

    example:

    .\Update-VMGpuPartitionDriver.ps1 -VMName "VM_BOB" -GPUName "AUTO"


    5. - With the VM still off, create a new .ps1 powershell file on the host named runme.ps1

    - Copy and paste the code below and save it as runme.ps1
    (Save it to the same directory as your .psm1 files from step 3)

    - Edit the first line and once again put the name of your VM. (VM_BOB in my example)

$vm = "VM_BOB"
if (Get-VMGpuPartitionAdapter -VMName $vm -ErrorAction SilentlyContinue) {
    Remove-VMGpuPartitionAdapter -VMName $vm
}
Set-VM -GuestControlledCacheTypes $true -VMName $vm
Set-VM -LowMemoryMappedIoSpace 1GB -VMName $vm
Set-VM -HighMemoryMappedIoSpace 32GB -VMName $vm
Add-VMGpuPartitionAdapter -VMName $vm



    - Now execute runme.ps1 in Powershell admin with the following command:

    .\runme.ps1


6. Now we should have the drivers copied into the VM and the GPU partitioning feature enabled. You can turn on the VM, go back to Device Manager, and see if your GPU is now shown under Display Adapters.

    NOTE: When you go to turn your virtual machine back on, you may encounter an error about not being able to create a checkpoint.

    The only solution I've found is to turn off checkpoints for the VM...

    Hyper-V Manager

    Click on your virtual machine -> Settings ->

    Click Checkpoints in the Management section.

    Change the type of checkpoint by selecting the appropriate option (change Production checkpoints to Standard checkpoints).
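The same change can be made from an admin PowerShell prompt instead of Hyper-V Manager; a sketch using the example VM name from step 4:

```powershell
# Hedged sketch: change the checkpoint type from PowerShell.
# Valid CheckpointType values: Production, ProductionOnly, Standard, Disabled.
Set-VM -VMName "VM_BOB" -CheckpointType Standard

# Or turn checkpoints off entirely, as described above:
Set-VM -VMName "VM_BOB" -CheckpointType Disabled
```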


  5. Posts : 2
    win10
       #15

    Starlet7222 said:
    I've learned so much about Hyper-V from all of the tutorials by @Kari and @Brink; that I hope I can return the favor here. I've created an account just to post this.
    mapsnrop said:
    I created an account just to say thank you. I was able to get this working and my virtual machine now shows my Radeon GPU installed in the host.
I also just created an account to thank you both. I think my first attempt failed because I downloaded the whole repo. However, I followed @mapsnrop's steps for my second attempt about a week later, and now my 3070 is showing in my VM.

Checkpoints are disabled. My only suggestion would be to use exports as a VM backup method instead, in case any noobs like myself stumble across this thread.

    Thanks again guys.


  6. Posts : 7
    Windows 10
       #16

    I also made an account explicitly to thank you for this post, as it more closely aligns with my use case of using an existing VM (recently converted to VHDX from VMDK) as a software developer, and needing to leverage my laptop’s discrete GPU for rendering/Unreal etc. A few things I came across that may help others:

1) Initially I received some script errors. Upon investigating, I realized the script author assumes there will be only a single disk on the target VM. In my case, I have one disk for the OS and another for projects, so the script was trying to concatenate them together. To fix this, I modified (forgive me, I don't have the code open at the moment) the 'where' expression in the pipe to use a filter, for instance "where $drive_var -eq "E"" (piped to the remaining code). This solved the problem.
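The original Easy-GPU-PV line isn't quoted above, so this is only an illustrative sketch of the kind of filter being described; the variable name and drive letter are assumptions:

```powershell
# Hedged sketch: restrict the partition lookup to one drive letter so a
# multi-disk VM doesn't get its paths concatenated. 'E' is illustrative.
$osPartition = Get-Disk | Get-Partition |
    Where-Object { $_.DriveLetter -eq 'E' }
```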

2) It's non-negotiable for me that I must use enhanced sessions, as I pass a lot of devices over VMConnect. Thus far, everything seems to still work fine. However, early on, I found that programs like Blender would show black-screen distortions when resizing windows in the guest. To solve this, I changed a setting in the host's NVIDIA control panel to prefer the discrete GPU for the vmconnect.exe process. This resolved the issue, and Blender correctly picks my NVIDIA card as the render device in-program as well.

3) At this time, I see three display adapters in the guest: Hyper-V Video, the remote one, and my RTX 3060. It claims the RTX is used for the 2 render attributes only, while the others show accelerated 3D. However, various tests with OpenGL extensions, chrome://gpu, and compiling C++ programs with graphics show that the NVIDIA card's attributes/Vulkan are correctly detected. Additionally, playing a 4K YouTube video in the guest shows NVIDIA decode load on the host properly. So I'm not sure why that is. As I do more full-fledged programming with heavy geometry I will report any additional gotchas.


  7. Posts : 6
    Windows 10
       #17

    Hi guys.

    Could somebody help me out please?
I am trying to get Blender running on my VM, so I did all the steps, and seemingly all the scripts ran fine, but when I try to start the VM it gets "stuck" on the black screen with "Hyper-V" on it and seems to be frozen...

    If I remove the GPU partitioning from the Host machine - then I can start up my VM and in the Device Manager there are three adapters:
    - Microsoft Hyper-V Video - ENABLED
    - Microsoft Remote Display Adapter - DISABLED
- NVIDIA Quadro T1000 - DISABLED

    Not sure if this is relevant but the host laptop has two different Display Adapters
    1) Intel(R) UHD Graphics
    2) NVIDIA Quadro T1000

    Any ideas what may be going wrong here?
    What can I do to diagnose the problem?
The VM is definitely running, because I can connect to the VNC server running on it; what I see, though, is just a black screen.
Something seems to be wrong with the video adapter(s) setup.

    Any help would be appreciated.
    Last edited by sylvesp; 23 Jun 2023 at 10:29.


  8. Posts : 7
    Windows 10
       #18

No, see my reply above. I'm able to run Blender 3.5 with an NVIDIA RTX just fine, also on a dual-GPU laptop. With the adapters disabled, I'd first check the HostDrivers folder (I believe it's called) under System32 inside the VM to ensure the driver files were copied over correctly; as I mentioned above, I found that I needed to modify the copy script a bit for my uses, since I use a VM with 2 virtual disks. Also run "winver" on both guest and host to ensure you're on the same patch version of Windows. It might not matter, but whenever I run a security update on one, I do the same for the other.
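To check from inside the guest, something like the following should list what was copied over. The folder name here is an assumption (recent Easy-GPU-PV versions use a HostDriverStore folder under System32); look for a similarly named folder if this path doesn't exist:

```powershell
# Hedged sketch (run inside the guest): list driver files copied from the host.
# The folder name is an assumption; inspect C:\Windows\System32 if it's absent.
Get-ChildItem "C:\Windows\System32\HostDriverStore" -Recurse -ErrorAction SilentlyContinue |
    Select-Object FullName -First 25
```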


  9. Posts : 6
    Windows 10
       #19

    Huberdoggy said:
No, see my reply above. I'm able to run Blender 3.5 with an NVIDIA RTX just fine, also on a dual-GPU laptop. With the adapters disabled, I'd first check the HostDrivers folder (I believe it's called) under System32 inside the VM to ensure the driver files were copied over correctly; as I mentioned above, I found that I needed to modify the copy script a bit for my uses, since I use a VM with 2 virtual disks. Also run "winver" on both guest and host to ensure you're on the same patch version of Windows. It might not matter, but whenever I run a security update on one, I do the same for the other.
    Thanks Huberdoggy.

I did check, and the files look to be in the right place... winver also shows the same version.
Still getting only a black screen.


  10. Posts : 7
    Windows 10
       #20

Try altering the value for the 32-bit MMIO space (LowMemoryMappedIoSpace) in the final script to 3GB. Not sure if the Quadro is different, but that's what I set for mine.
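In the runme.ps1 script from step 5, that amounts to changing one line; a sketch (run with the VM off, using the example VM name from earlier in the thread):

```powershell
# Hedged sketch: raise the 32-bit MMIO space from 1GB to 3GB, as suggested above.
$vm = "VM_BOB"   # example VM name; substitute your own
Set-VM -LowMemoryMappedIoSpace 3GB -VMName $vm
```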


 

Windows 10 Forums is an independent web site and has not been authorized, sponsored, or otherwise approved by Microsoft Corporation. "Windows 10" and related materials are trademarks of Microsoft Corp.