200TB backups feasible with Duplicati?

I just set up Duplicati to back up around 200TB from a ZFS array to a local MinIO cluster. Am I asking for pain, or will the system be able to back up and restore at these sizes?

The 2TB Windows box seems fine… just wondering if 100x that will make it barf. 🙂

At a minimum you’d want to increase the deduplication block size. The default of 100KiB would result in an enormous number of blocks to track. Increasing the dedupe block size reduces dedupe efficiency, but it also means fewer blocks to track, which helps performance and keeps the SQLite database smaller.

On my system, my larger backups are near 1TiB and I use a 1MiB block size. I’m not sure what the optimal size would be in your case, but 100MiB may not be out of line. Some rough math below shows why.
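
To put numbers on that (just a back-of-envelope sketch in Python; it ignores dedupe and compression, and assumes the full 200TiB from the original post):

```python
# Rough block counts for a ~200 TiB source at various dedupe block sizes.
# Duplicati tracks every unique block in its local SQLite database, so the
# block count is a decent proxy for database size and query cost.
TiB = 1024**4
source_bytes = 200 * TiB

for label, block_size in [("100 KiB (default)", 100 * 1024),
                          ("1 MiB", 1024**2),
                          ("100 MiB", 100 * 1024**2)]:
    blocks = source_bytes // block_size
    print(f"{label:>18}: ~{blocks:,} blocks to track")
```

That works out to roughly 2.1 billion blocks at the default, ~210 million at 1MiB, and ~2.1 million at 100MiB, which is why the default is a non-starter at this scale.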

You should also probably increase the remote volume size (the default is 50MiB). Maybe 1GiB would be better, especially since it’s a local target.
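
Same arithmetic for the remote volume count (again just a sketch, assuming the full 200TiB lands on the backend without compression or dedupe savings):

```python
# Remote volume (dblock) counts: each volume is one object in MinIO, so
# fewer, larger volumes means fewer remote files to list and verify.
TiB = 1024**4
backend_bytes = 200 * TiB

for label, vol_size in [("50 MiB (default)", 50 * 1024**2),
                        ("1 GiB", 1024**3)]:
    volumes = backend_bytes // vol_size
    print(f"{label:>17}: ~{volumes:,} remote volumes")
```

That’s ~4.2 million remote objects at the default versus ~205,000 at 1GiB. If you’re setting these from the command line, the two settings are Duplicati’s --blocksize and --dblock-size options.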

Good luck.

Thanks for the input. I’ll be giving it a go after I move some disks around.