I have discovered an easy way to compact VHDs


  1. Posts : 26,407
    10 Home x64 (21H2) (10 Pro on 2nd pc)
       #11

    cereberus said:
    The Hyper-V interface seems to be a bit temperamental i.e. it does not always offer a compact option...
    It always seems to offer me the compact option; it's just that when I click Finish, more often than not it returns immediately without showing any 'editing disk' progress bar.


  2. Posts : 5,478
    2004
       #12

    Bree said:
    It always seems to offer me the compact option; it's just that when I click Finish, more often than not it returns immediately without showing any 'editing disk' progress bar.
    That just means it thinks it has nothing to do. In that case you need to try defragging the disk first. I found that running defrag /x (free space consolidation) and defrag /o within the guest helps.

    sdelete should not be required if your disk is NTFS, but it seems it still is.

    Then, if you run compact either from the Hyper-V Manager on the host or as described above, it should shrink the disk.
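
    For anyone who prefers the command line to Hyper-V Manager, a minimal sketch of the host-side compact via Diskpart might look like the following. The .vhdx path is a placeholder, and this assumes an elevated prompt with the VM shut down; it is an illustration, not the poster's exact procedure.

```powershell
# Sketch: compact a VHDX from the host using a diskpart script.
# "D:\VMs\guest.vhdx" is a placeholder path - substitute your own file.
$script = @"
select vdisk file="D:\VMs\guest.vhdx"
attach vdisk readonly
compact vdisk
detach vdisk
"@
$script | Set-Content "$env:TEMP\compact-vhdx.txt" -Encoding ASCII
diskpart /s "$env:TEMP\compact-vhdx.txt"
```

    Attaching the vdisk read-only before compacting is the standard Diskpart sequence; compact only reclaims space if the guest's free space was defragged/zeroed first, as described above.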


  3. Posts : 11,244
    Windows / Linux : Arch Linux
       #13

    Hi there

    I believe you can also use raw partitions / raw disks in VMs, which avoids the whole issue! Especially as using raw HDDs will use the native file system of the guest OS "as is", rather than "virtual I/O" in the guest OS on top of Windows I/O on the host (double I/O).

    Not sure why the literature recommends people avoid the use of native HDDs in guest OSes - it works fine for me.

    Cheers
    jimbo


  4. Posts : 14,170
    Windows10
    Thread Starter
       #14

    lx07 said:
    I have been doing this for a few years now:
    Code:
    # This function calls defrag.exe /x to consolidate free space, then defrag /o,
    # and then runs sdelete. It should only be run inside the VM - not on the host.
    function ShrinkVM
    {
        askQuestion "Run Defrag & sdelete?"
        Switch ($prompt)
        {
            0
            {
                $Command = "$env:windir\System32\defrag.exe"
                $Parms = "C: /h /x"
                $Prms = $Parms.Split(" ")
                & $Command $Prms

                $Command = "$env:windir\System32\defrag.exe"
                $Parms = "C: /h /o"
                $Prms = $Parms.Split(" ")
                & $Command $Prms

                $Command = $sdeletePath
                $Parms = "C: /z"
                $Prms = $Parms.Split(" ")
                & $Command $Prms
            }
            1 { Write-Host "Request cancelled - Defrag not run" -f Yellow }
        }
        If (-not $runAll) { pressAnyKey }
    }

    And then compacting it.

    However, while it works, sdelete is very slow, and using Macrium (or anything else, I guess) to back up and restore an image results in a smaller vhdx and is much faster (that was @Kari's idea some time ago).

    I've not figured out a way to automate a backup/restore yet, so I stick to defrag (especially the consolidate free space option) followed by sdelete, which, while much slower, is not too much worse as an end result.
    I have googled this, and it is amazing that none of the myriad articles mention the defrag step!

    This is clearly the crucial step, and sdelete (as far as I can tell) is secondary.

    However, you clearly sussed it ages ago.

    It feels like we have the makings of a great tutorial here.


  5. Posts : 5,478
    2004
       #15

    cereberus said:
    sdelete (as far as I can tell) is secondary.
    According to the documentation it is not required. The compact command should take care of deleted files on NTFS. (if you are compacting Linux you still need to zero free space afaik).

    However I found it doesn't work well and you need to defrag first (as you found).

    There is also Optimize-VHD which, if you set the -Mode flag to Full, may work. It appears the Full parameter takes care of consolidation and zeroing, but I've not tried it so can't say if it works well.
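
    For anyone wanting to try it, a hedged sketch of that Optimize-VHD approach (the path is a placeholder; it needs the Hyper-V PowerShell module, an elevated prompt, and the VM shut down):

```powershell
# Sketch: full optimization of a dynamically expanding VHDX.
# "D:\VMs\guest.vhdx" is a placeholder - substitute your own file.
# Full mode reclaims the most space but is the slowest, and it works on
# a disk that is mounted read-only.
Mount-VHD -Path "D:\VMs\guest.vhdx" -ReadOnly
Optimize-VHD -Path "D:\VMs\guest.vhdx" -Mode Full
Dismount-VHD -Path "D:\VMs\guest.vhdx"
```

    As with the Diskpart route, how much this reclaims still depends on the guest's free space having been consolidated and zeroed first.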

    When I tested before, save/restore worked better (but is too annoying to do regularly) - see here Compacting virtual hard disks - Windows 10 Forums


  6. Posts : 14,170
    Windows10
    Thread Starter
       #16

    lx07 said:
    According to the documentation it is not required. The compact command should take care of deleted files on NTFS. (if you are compacting Linux you still need to zero free space afaik).

    However I found it doesn't work well and you need to defrag first (as you found).

    There is also Optimize-VHD which, if you set the -Mode flag to Full, may work. It appears the Full parameter takes care of consolidation and zeroing, but I've not tried it so can't say if it works well.

    When I tested before, save/restore worked better (but is too annoying to do regularly) - see here Compacting virtual hard disks - Windows 10 Forums

    Yeah - I just wanted something that gave reasonable compression. I actually have it off pat to get it down to the bare minimum using MiniTool Partition Wizard:

    1) Create a new vhd from Disk Management (initialise the new one the same as the existing, i.e. size and GPT/MBR)

    2) Mount the new vhd and the existing vhd as drives

    3) Shrink the OS partition as far as you can (I allow a 1 GB margin, as shrinking to the bare minimum takes a long time)

    4) Delete the MSR partition on the new vhd created when initialising

    5) Copy all partitions from the existing vhd to the new vhd, making sure you deselect partition resizing (I reorder them so the OS partition is last)

    6) Resize the OS partition to fill the unallocated space on the new vhd

    It looks like a lot, but it only takes a minute or so in MiniTool, as you can queue up all the commands in one batch and just click accept.

    I find this even easier than messing around with Macrium Reflect Free.

    Then finally:

    7) If it's a legacy BIOS installation, mark the system partition as active

    8) Rename the new partition as required

    Whilst defragging/sdeleting may be slightly less efficient space-wise, I found it is much less hassle, and quicker, on my system.

    I am certainly going to give your PowerShell script a whirl.

    Cheers

    C.


  7. Posts : 14,170
    Windows10
    Thread Starter
       #17

    Well, I cannot entirely claim kudos for this post now.

    Now I know defrag is a key step, I searched the web, and guess what I found on Google:

    Posts from @Kyhi & @lx07 (Kudos to them)

    How to release unused space in VHDX disks? Solved - Windows 10 Forums

    How to release unused space in VHDX disks? Solved - Windows 10 Forums

    I have obviously only reinvented the wheel :-D

    It was interesting - many articles said defrag was useless, but they did not make the mental leap to compact afterwards.

    I will attempt to write a tutorial so this is captured for all to use, and will post a draft here for comment by cleverer peers than me.


  8. Posts : 26,407
    10 Home x64 (21H2) (10 Pro on 2nd pc)
       #18

    cereberus said:
    As we all know, vhds grow in physical storage size even if the vhds are much smaller in content inside the vhd....
    lx07 said:
    ...if you are compacting Linux you still need to zero free space afaik....

    I have been using this defrag/compact tip for years now to keep almost all my Hyper-V VM .vhdx files as slim as possible. Not only does it keep them from filling up the Host machine's drive, but it also dramatically reduces the time it takes and the size of my routine system images of the Host machine.

    I'm revisiting this thread now because of that 'almost' in the sentence above - one of my VMs is Linux Mint and you can't use Defrag on its EXT4 .vhdx, nor can Diskpart compact it without some help.

    Now I don't use it that often, so its Linux was long overdue for updating. I have just updated it from Linux Mint 19.1 to the newest release, Linux Mint 21. That's not possible directly, so I had to upgrade through several intermediate builds before the final upgrade to 21.

    All those upgrades had of necessity created and then deleted a lot of files along the way. The .vhdx had grown from its original size of 25.5GB to 48GB, even though the files it contained totalled little more than before I started. All that extra size was taken up by the (now quite large) amount of data still sitting in those now-unused sectors.

    I can confirm that, as lx07 said, the key is to zero those unused sectors; only then can Diskpart compact this unknown file system. I found help to do that here:

    When a VHDX contains file systems that the VHDX driver recognizes, it can work intelligently with the contained allocation table to remove unused blocks, even if they still contain data. When a VHDX contains file systems commonly found on Linux (such as the various iterations of ext), the system needs some help......

    Because the VHDX cannot parse the file system, it can only remove blocks that contain all zeros. With that knowledge, we now have a goal: zero out unused blocks. We’ll need to do that from within the guest...

    Preferred Method: fstrim
    Usage:
    sudo fstrim /
    How to Compact a VHDX with a Linux Filesystem

    Fragmentation in Linux shouldn't be as much of an issue as it is in NTFS, but just in case it would help I also found that you can in fact easily defrag the files (but not the file system itself). The command to defrag all files in the Linux system is sudo e4defrag /

    e4defrag(8) - Linux manual page

    After using both the above commands in the guest machine Diskpart could compact my .vhdx back down to a much more acceptable 29.1GB, only 3.6GB larger than before all the upgrades.


  9. Posts : 3,510
    Windows 11 Pro, 22H2
       #19

    Very cool tip, thanks.

    I'm glad to hear that this works on Hyper-V as well. That's also exactly how VMware instructs its users to shrink their virtual disks, so I've been using this method for years on VMware, but I never tried it with a VHD, simply because I use them so infrequently.

    But, now that I think about it, I have a Plex server running in Hyper-V (long story behind that!) that could use some love. I think I'll give this a try.

    Thanks!


  10. Posts : 11,244
    Windows / Linux : Arch Linux
       #20

    Bree said:
    I have been using this defrag/compact tip for years now to keep almost all my Hyper-V VM .vhdx files as slim as possible. Not only does it keep them from filling up the Host machine's drive, but it also dramatically reduces the time it takes and the size of my routine system images of the Host machine.

    I'm revisiting this thread now because of that 'almost' in the sentence above - one of my VMs is Linux Mint and you can't use Defrag on its EXT4 .vhdx, nor can Diskpart compact it without some help.

    Now I don't use it that often, so its Linux was long overdue for updating. I have just updated it from Linux Mint 19.1 to the newest release, Linux Mint 21. That's not possible directly, so I had to upgrade through several intermediate builds before the final upgrade to 21.

    All those upgrades had of necessity created and then deleted a lot of files along the way. The .vhdx had grown from its original size of 25.5GB to 48GB, even though the files it contained totalled little more than before I started. All that extra size was taken up by the (now quite large) amount of data still sitting in those now-unused sectors.

    I can confirm that, as lx07 said, the key is to zero those unused sectors; only then can Diskpart compact this unknown file system. I found help to do that here:

    How to Compact a VHDX with a Linux Filesystem

    Fragmentation in Linux shouldn't be as much of an issue as it is in NTFS, but just in case it would help I also found that you can in fact easily defrag the files (but not the file system itself). The command to defrag all files in the Linux system is sudo e4defrag /

    e4defrag(8) - Linux manual page

    After using both the above commands in the guest machine Diskpart could compact my .vhdx back down to a much more acceptable 29.1GB, only 3.6GB larger than before all the upgrades.
    When you create a vhdx, my view is that if you are using it for Windows, you should create one with a fixed maximum size right at the start. Then it just behaves like a standard Windows physical disk. These days even SSD sizes are big, so allocating a fixed vhdx storage size shouldn't be a problem. The trick is to keep the OS as small as possible and all your data etc. on other disks, partitions etc.
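
    A minimal sketch of creating such a fixed-size disk with the Hyper-V PowerShell module (the path and 64GB size are just example values):

```powershell
# Sketch: create a fixed-size VHDX so it never grows and never needs compacting.
# A fixed disk allocates the full 64GB on the host up front.
New-VHD -Path "D:\VMs\fixed.vhdx" -SizeBytes 64GB -Fixed
```

    The trade-off is the up-front host space: a fixed vhdx occupies its full size from day one, in exchange for never bloating.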

    As far as Linux is concerned - if you choose the XFS file system rather than ext3/4, then it's "automatically self-adjusting"; no defragging etc. is required. (A better, more "unbreakable" file system.) Note though that you can't reduce XFS partition sizes once allocated, but they can be extended with standard partition managers like GParted.

    Cheers
    jimbo


 

Windows 10 Forums is an independent web site and has not been authorized, sponsored, or otherwise approved by Microsoft Corporation. "Windows 10" and related materials are trademarks of Microsoft Corp.
