Recommended Settings and Questions

I have 15TB of data to back up, with roughly a 10% change daily. What would be the best settings for block size and volume size if it is being sent to a cloud store like Wasabi or B2?

Most of it consists of files that already have internal compression. Should I turn off compression entirely, or just add those extensions to the excluded list?

Also, when I tested Duplicati before, it seemed like it was opening/reading files that had not changed. Is that by design, or is there something I'm doing that makes it do that?

Hello and welcome!

For such a large backup I would set the deduplication block size to something like 15MB. The default (100KB) is far too small at this scale and will result in a large, inefficient local database, since the database has to track every block.
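To make that concrete, here is a minimal back-of-the-envelope sketch (plain Python, just arithmetic, not anything Duplicati reports) comparing how many blocks the local database would have to track at the default 100KB block size versus 15MB for a 15TB source:

```python
# Rough block-count comparison for a 15 TB source (arithmetic only).
TB = 1000**4   # decimal units; binary units change the picture very little
KB = 1000
MB = 1000**2

source_size = 15 * TB

for blocksize in (100 * KB, 15 * MB):
    blocks = source_size / blocksize
    print(f"blocksize {blocksize / MB:g} MB -> ~{blocks / 1e6:,.1f} million blocks")

# Expected output (approximately):
#   blocksize 0.1 MB -> ~150.0 million blocks
#   blocksize 15 MB  -> ~1.0 million blocks
```

Every one of those blocks needs entries in the local database, so dropping from ~150 million to ~1 million blocks keeps database lookups and maintenance operations far cheaper.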

For the volume size, it shouldn't matter too much. I use the default of 50MB on all my systems (B2 back end). The main reason you might need to change it is if you use a back end that limits the number of files it can hold; B2, S3, Wasabi, and many others have no such limit. You could increase it from 50MB if you want to reduce the number of remote files. That may improve upload performance, but at the expense of needing more temp space to process backup operations and requiring more data to be downloaded for restores. Remember that Duplicati must download at least one full volume to restore even the smallest file.
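As a rough illustration of that tradeoff (again just my own arithmetic, and ignoring whatever compression and deduplication save you), here is how the remote volume count for a 15TB initial backup scales with the volume size:

```python
# Approximate number of remote dblock volumes for a 15 TB initial backup.
# Ignores compression and deduplication, which would shrink these numbers.
TB = 1000**4
MB = 1000**2

source_size = 15 * TB

for volume_size_mb in (50, 200, 500, 1000):
    volumes = source_size / (volume_size_mb * MB)
    print(f"{volume_size_mb:>5} MB volumes -> ~{volumes:,.0f} dblock files")

# Expected output (approximately):
#    50 MB volumes -> ~300,000 dblock files
#   200 MB volumes -> ~75,000 dblock files
#   500 MB volumes -> ~30,000 dblock files
#  1000 MB volumes -> ~15,000 dblock files
```

None of these counts hits a limit on B2 or Wasabi, so the 50MB default is a safe starting point; larger volumes mainly trade temp space and restore download size for fewer remote files, as described above.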