- For the Web server content I would keep the default settings.
Increasing the block size (default 100KB) will not help much, because most files are no larger than one or two 100KB blocks.
The DBlock size does not have to be increased either: with 60 GB of source data, the default setting of 50MB will generate about 1000-1500 DBlock files.
- VHD files: I would suggest choosing a block size that is a multiple of the filesystem block size inside your VHD file. This will (hopefully) upload only the changed VHD filesystem blocks when a file is added or modified inside the VHD container. I'm not 100% sure this works the same way for thin provisioned VHD files and VHDX files (I'm not sure about the internal structure of VHD files), but I guess using this rule of thumb will not hurt.
If your NTFS partition uses a block size of 64KB, you could choose a block size of 64KB, 128KB, 192KB or 256KB, maybe more if you have very large VHD files.
- Music collection:
I would increase the block size and DBlock size a bit more, for example a block size of 1MB and a DBlock size of 2GB.
I suppose chances are small that deduplication will happen (the files are all compressed audio), so a small block size will only result in more overhead.
A larger DBlock will result in a shorter list of files at the backend. The drawback is that restore operations will require more data to download (the complete 2GB file, if it contains a fragment of the data that must be restored), but I guess chances are small that you will need to restore 3 or 4 selected MP3s from your collection of 140000 files.
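As a quick sanity check on the numbers above, here is a back-of-envelope sketch in Python. The ~8 MB average audio file size is my own assumption for illustration; the other figures come straight from the scenarios discussed.

```python
# Rough estimates for the three scenarios above.
KB = 1024
MB = 1024 * KB
GB = 1024 * MB

# Web server content: 60 GB of source data, default 50 MB DBlock size.
source_size = 60 * GB
dblock_size = 50 * MB
dblock_count = source_size // dblock_size
print(f"Web server: ~{dblock_count} DBlock files")  # before compression/dedup

# VHD files: candidate block sizes should align with the NTFS cluster size.
fs_block = 64 * KB
candidates = (64 * KB, 128 * KB, 192 * KB, 256 * KB)
assert all(c % fs_block == 0 for c in candidates)  # all are exact multiples

# Music collection: 140000 files, assuming ~8 MB per file (hypothetical),
# stored in 2 GB DBlock files.
music_size = 140_000 * 8 * MB
music_dblocks = music_size // (2 * GB)
print(f"Music: ~{music_dblocks} DBlock files; restoring one song may "
      f"require downloading a full 2 GB DBlock")
```

This is only a rough count; compression and deduplication will reduce the actual number of DBlock files on the backend.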
If you want to know approximately what the results will be if you change the (D)Block settings, you can use the Duplicati Storage Calculator mentioned here: