Verifying Remote Data

The cloud files list is just a list of what’s in the cloud folder as if you browsed it - so it would be a bunch of “duplicati-*” files that have dlist, dindex, or dblock in the names.
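
For example, a listing might look something like this (illustrative names only - the timestamps and random IDs will differ, and the .aes suffix only shows up if encryption is on):

```
duplicati-20240101T020000Z.dlist.zip.aes
duplicati-b0a1b2c3d4e5f60718293a4b5c6d7e8f9.dblock.zip.aes
duplicati-i0a1b2c3d4e5f60718293a4b5c6d7e8f9.dindex.zip.aes
```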

Initial backups are always slow because every 50KB (default) block of every file has to be read, hashed, compressed, encrypted, uploaded, and recorded in the local database.
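
Conceptually, the per-file part of that work looks roughly like this (a simplified sketch, not Duplicati’s actual code - the 50KB block size and SHA-256 hashing are just illustrative):

```python
import hashlib

BLOCK_SIZE = 50 * 1024  # illustrative default block size (50KB)

def hash_blocks(path):
    """Read a file in fixed-size blocks and hash each one (sketch only)."""
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            # Deduplication keys off the block hash; SHA-256 here is an
            # assumption for illustration.
            yield hashlib.sha256(block).hexdigest(), len(block)

# Every block of every source file goes through a step like this on the
# first backup, which is why the initial run takes so long.
```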

While 25.5TB of source data is within Duplicati’s normal usage limits, it means there will end up being about 11 million hash rows stored in the local database. Writing all of those during the initial backup takes a while. And what’s worse, as more hashes are written, the check to see whether a hash already exists gets slower and slower. (There is an update coming to help optimize this, but I don’t know when it will happen.)
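
To picture why that check matters: every new block hash has to be looked up against everything already recorded. A toy version of that lookup, using a made-up table name and columns (not Duplicati’s real schema), might be:

```python
import sqlite3

con = sqlite3.connect("local.sqlite")
con.execute("CREATE TABLE IF NOT EXISTS block (hash TEXT, size INTEGER)")
# Without an index this lookup scans the whole table, so it gets slower
# and slower as hash rows pile up during the initial backup.
con.execute("CREATE INDEX IF NOT EXISTS block_hash ON block (hash)")

def block_exists(block_hash, size):
    cur = con.execute(
        "SELECT 1 FROM block WHERE hash = ? AND size = ? LIMIT 1",
        (block_hash, size),
    )
    return cur.fetchone() is not None
```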

Once the initial backup is done, the database settles down to mostly reads and things run quite a bit faster.

As for a 1GB upload volume (dblock) size, that is something people have done - but usually only for local backups and when source files themselves are multiple gigs in size.

The main reason it’s probably not a good idea for you is that when you go to restore a file, no matter its size, the entire dblock file has to be downloaded so the individual blocks of the file can be pulled out and reassembled. With a 1GB dblock size, that means restoring a 10MB JPG starts with a 1GB download. Ouch!

Continuing with that example, a 10MB JPG would itself likely have been chopped into 205 blocks. If all the blocks happened to be stored in a single dblock, that’s great - only 1GB needs to be downloaded. But if you made a change to that JPG, the updated blocks would likely be stored in a different dblock file, so now you’ve got 2GB of downloads just to restore that 10MB JPG.
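
Here’s the back-of-the-envelope arithmetic behind that example (just using the sizes mentioned above):

```python
import math

block_size = 50 * 1024          # 50KB blocks (the default mentioned above)
file_size = 10 * 1024 * 1024    # a 10MB JPG
dblock_size = 1 * 1024**3       # a 1GB upload volume

blocks = math.ceil(file_size / block_size)   # = 205 blocks
# Best case: all 205 blocks sit in one dblock -> download 1 volume (1GB).
# After an edit, the changed blocks land in a newer dblock -> 2 volumes
# (2GB) downloaded to restore a single 10MB file.
print(blocks, 1 * dblock_size, 2 * dblock_size)
```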

And there’s similar overhead elsewhere - if you’re using a retention policy to thin out older versions, the compacting process also downloads multiple 1GB dblock files so they can be recompressed and re-uploaded as fewer dblock files.
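
(For reference, that thinning is usually driven by the retention-policy advanced option, which takes comma-separated timeframe:interval pairs - the value below is just an example, not a recommendation:)

```
--retention-policy="1W:1D,4W:1W,12M:1M"
```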

Sorry - I’ve probably gone on a little too much about that process. :blush:

Note that you can change the dblock (upload volume) size any time you want, and new files created will use the new dblock size. Duplicati is fine with mixed dblock sizes on the destination. So if you want to stop the backup (I’d suggest using “stop after current upload”) and change from the default 50MB to something bigger just to see what happens, that should work just fine.
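
If you do experiment, the setting is the upload volume size on the job’s Options screen, or the dblock-size advanced option on the command line - for example (200MB here is just an arbitrary illustration):

```
--dblock-size=200MB
```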
