Is my initial backup too large? Thinking about Duplicacy or qbackup instead

Initial backup should be able to be repaired points to developer comments about what should happen in general. Ideally the next backup resumes where the previous one stopped, but it may depend on how the prior backup failed.

More ideally, it even creates a synthetic file list at the start of the resumed backup showing where the previous one ended; however, I explain there why I think that's not working. I wonder whether fixing it would help the initial backup issue?

Choosing sizes in Duplicati offers some advice on the block size and remote volume size question. The --blocksize default of 100KB should probably be raised unless you want your 5 TB divided into roughly 50 million blocks that the database has to track; operations get slow when the database gets big. Because you're backing up mostly media, which probably has few block-level similarities between files, the loss of deduplication from a larger block size might not matter much, and with very different files it might even help, because otherwise a file's incremental changes could wind up scattered across different huge dblock files and you might need to download a lot to gather them. There's a tradeoff between the burden of a large set of small dblocks and a small set of large dblocks. A dblock is never partially downloaded, so there's some merit to the idea of smaller dblocks for smaller sources.
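
To make the tradeoff concrete, here's a back-of-the-envelope sketch in Python. The only number taken from your setup is the 5 TB source size; the candidate --blocksize and --dblock-size values are illustrative assumptions, not recommendations, and I'm using decimal units just to keep the arithmetic simple:

```python
# Rough numbers for a 5 TB source under different --blocksize and
# --dblock-size (remote volume size) choices. The 5 TB figure comes
# from the post; the candidate sizes below are assumptions for
# illustration only.

KB, MB, TB = 10**3, 10**6, 10**12
source_bytes = 5 * TB

print("Block count the local database must track:")
for blocksize in (100 * KB, 1 * MB, 5 * MB):
    blocks = source_bytes / blocksize
    print(f"  --blocksize {blocksize // KB:>5} KB  -> ~{blocks / 1e6:.0f} million blocks")

print("Remote volume count (each dblock is uploaded/downloaded whole):")
for dblock in (50 * MB, 500 * MB, 2000 * MB):
    volumes = source_bytes / dblock
    print(f"  --dblock-size {dblock // MB:>5} MB -> ~{volumes:,.0f} dblock files")
```

The first loop reproduces the ~50 million figure for the 100KB default; the second shows how the remote volume size trades the number of files on the destination against how much data any single restore or compact has to pull down.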
