Thank you. Fortunately I am still at the experimental stage of setting this up, so I don't have any files on the volume to back up.
I decided to go back to square one. I removed the virtual disk and the storage pool, reinitialized the physical disks, and replaced one of the disks with a larger one so that the setup wasn't identical.
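In case it helps anyone following along, the teardown went roughly like this (the friendly names and the disk number are placeholders for my actual ones):

Code:
# Delete the virtual disk and then the pool it belonged to
Remove-VirtualDisk -FriendlyName "MyVolume" -Confirm:$false
Remove-StoragePool -FriendlyName "MyStoragePool" -Confirm:$false

# Only needed if a disk still has old partitions on it
Clear-Disk -Number 1 -RemoveData -Confirm:$false

# Confirm the disks are available for pooling again
Get-PhysicalDisk -CanPool $true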
I then recreated the storage pool using the same PowerShell command as before (see earlier post). The size of the pool was now 3.98TB, very slightly smaller than the sum of the physical disk sizes. So far so good.
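For anyone who wants to check this themselves, the pool's total and allocated sizes can be read with something like:

Code:
# Report the pool's total and allocated size in TB
Get-StoragePool -FriendlyName "MyStoragePool" |
    Select-Object FriendlyName,
        @{Name="SizeTB"; Expression={[math]::Round($_.Size / 1TB, 2)}},
        @{Name="AllocatedTB"; Expression={[math]::Round($_.AllocatedSize / 1TB, 2)}}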
I created a new volume, again using the same command as before (with UseMaximumSize). This resulted in a volume/storage space of 2.72TB, about 70% of the available space in the pool. This is about the same proportion as last time, so it seems the behaviour is consistent, albeit consistently wrong!
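In other words, effectively the explicit-size command quoted further down, but with -UseMaximumSize in place of -Size:

Code:
New-Volume -StoragePoolFriendlyName "MyStoragePool" -FriendlyName "MyVolume" -ResiliencySettingName "Simple" -FileSystem NTFS -AccessPath "M:" -ProvisioningType Fixed -UseMaximumSize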
I removed this volume and, following your suggestion, created another one, this time manually specifying a size very slightly less than the total pool size:
Code:
New-Volume -StoragePoolFriendlyName "MyStoragePool" -FriendlyName "MyVolume" -ResiliencySettingName "Simple" -FileSystem NTFS -AccessPath "M:" -ProvisioningType Fixed -Size 3.97TB
I was pleased to find that this worked and gave me a volume of the correct size, using almost all the space in the pool. That's great, but it does seem there is a bug with the -UseMaximumSize parameter in this context.
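If anyone wants to avoid hard-coding the 3.97TB figure, I imagine the size could be derived from the pool itself, leaving a little headroom for pool metadata; something like this (untested):

Code:
# Untested: size the volume from the pool, leaving ~1% headroom for metadata
$pool = Get-StoragePool -FriendlyName "MyStoragePool"
$size = [uint64]([math]::Floor(($pool.Size * 0.99) / 1GB) * 1GB)
New-Volume -StoragePoolFriendlyName "MyStoragePool" -FriendlyName "MyVolume" -ResiliencySettingName "Simple" -FileSystem NTFS -AccessPath "M:" -ProvisioningType Fixed -Size $size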
However, I have not tested whether I can actually copy 3.97TB of data to the volume. Perhaps I should, since a storage space can in theory be declared larger than the pool that backs it (with thin provisioning, at least), so the reported size alone doesn't prove the capacity is really there.
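Before copying that much real data, a quicker sanity check (as far as I can tell) is to compare the virtual disk's footprint on the pool with its nominal size; with fixed provisioning the full capacity should already be allocated in the pool:

Code:
# With -ProvisioningType Fixed, FootprintOnPool should be close to the volume size
Get-VirtualDisk -FriendlyName "MyVolume" |
    Select-Object FriendlyName, ProvisioningType,
        @{Name="SizeTB"; Expression={[math]::Round($_.Size / 1TB, 2)}},
        @{Name="FootprintTB"; Expression={[math]::Round($_.FootprintOnPool / 1TB, 2)}}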