Best way to change the block size

Hi. I want to change the block size of some backups for performance reasons. I understand that I can’t change the block size of an existing backup, I need to create a new one. What is the best way to go about that?

I want the new backup to have exactly the same settings as the old one, just with a bigger block size.

I see you can export and then re-import a configuration, changing the local DB name in the process. I guess then I would have to manually delete all the remote files. Is that the best way to do it? Can I not modify the original somehow and get it to regenerate everything?

Yeah, if you don’t need to retain the existing backups then I would do this:

  • Delete all existing files on the back end
  • Click on the backup job in the web UI to expand options, click the blue “Database…” link, then click the Delete button
  • Edit the backup job and change your block size

If the back end (remote) side has no Duplicati files, and there is no local database, then Duplicati will behave as if it’s doing its very first backup.
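For reference, the same clean-start procedure can be sketched from the command line. This is an untested illustration, not a recipe: the paths, backend URL, and database filename are placeholders you'd replace with your own, and the exact steps depend on your storage backend. `--blocksize` is the relevant Duplicati option.

```shell
# Illustrative only -- paths, URLs, and the database filename are placeholders.

# 1. Delete all Duplicati files on the back end (example: a local-folder backend)
rm -f /mnt/backup-target/duplicati-*

# 2. Delete the job's local database (the path is shown on the "Database..." page)
rm -f ~/.config/Duplicati/CLIENTDB.sqlite

# 3. Run the backup with the new block size; with no remote files and no local
#    database, Duplicati behaves as if this were its very first backup.
duplicati-cli backup file:///mnt/backup-target /data --blocksize=10MB
```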


Thanks, I’m trying that now. It’s uploading, and it still says 13 versions available, but presumably that’s just the UI waiting for the backup to complete before updating.

Correct, the stats will update once the job completes.

Thanks, I can confirm it worked and the display updated. By increasing the block size to 10MB (from the default 100KB), the database shrank from 4.6GB to 380MB.


Out of curiosity how much data are you backing up? (Source size)

That particular one is about 1.5TB.
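As a back-of-the-envelope check on those numbers (illustrative only — the real database also stores paths, hashes, logs, and so on, so it won't shrink exactly proportionally):

```python
# Rough block-count estimate for a 1.5 TB source at the two block sizes
# discussed above. Purely arithmetic; not Duplicati's actual accounting.
TB = 1000**4
KB = 1000
MB = 1000**2

source = 1.5 * TB
blocks_100k = source / (100 * KB)   # default blocksize
blocks_10m = source / (10 * MB)     # increased blocksize

print(f"100 KB blocks: {blocks_100k:,.0f}")   # 15,000,000
print(f"10 MB blocks:  {blocks_10m:,.0f}")    # 150,000
print(f"reduction:     {blocks_100k / blocks_10m:.0f}x")  # 100x
```

A 100× drop in block count is consistent with the database shrinking by an order of magnitude, since much of the database is per-block metadata.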

Yeah good call on using a larger block size!

I was going to comment on the bug about increasing the default size. To my mind it might as well be larger, since the most common type of duplicate is a whole file, and files smaller than the block size get their own block anyway.
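To illustrate that point with a toy fixed-block dedup sketch (made-up blocksize and file contents, not Duplicati's actual code): a whole-file duplicate contributes no new blocks regardless of blocksize, while a file smaller than the blocksize still costs exactly one block.

```python
import hashlib

def block_hashes(data: bytes, blocksize: int):
    """Split data into fixed-size blocks and hash each one, the way
    fixed-block dedup schemes identify duplicate blocks."""
    return [hashlib.sha256(data[i:i + blocksize]).hexdigest()
            for i in range(0, len(data), blocksize)]

blocksize = 8  # tiny blocksize, for illustration only

file_a = b"hello world, hello world!!"   # 26 bytes -> 4 blocks
file_b = file_a                          # whole-file duplicate
file_c = b"tiny"                         # smaller than blocksize -> 1 block

stored = set()
uploaded = 0
for f in (file_a, file_b, file_c):
    for h in block_hashes(f, blocksize):
        if h not in stored:
            stored.add(h)
            uploaded += 1

# file_a: 4 new blocks, file_b: 0 new blocks, file_c: 1 new block
print(uploaded)  # 5
```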

Feel free. So far I think I’ve just been expressing opinions. If nothing else, small blocks mean bigger databases. And while I’m completely thrilled at the idea that the SQL might scale better, DB sizes might still matter…

But unless the big one was also a clean start, note that databases can grow over time and benefit from a vacuum. The database also holds the job logs, so if you’re comparing blocksize effects, be aware of such factors…

One possible other argument against small blocksizes is that compact can move blocks around, so a restore might have to fetch more dblock files to gather up all the scattered blocks.

This could all benefit from extensive surveying, study, and simulation, but who’s going to do all that?
Whole-file duplicates can greatly benefit from dedup, but for some people, block-level may help too.
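As a tiny example of the kind of simulation that could test the compact argument (a toy model with made-up numbers, not Duplicati's real compact logic): model compact as reshuffling surviving blocks from many files into new volumes, and count how many volumes a restore of one file must download before and after.

```python
import random

random.seed(0)

blocks_per_volume = 10
total_blocks = 500            # blocks from many backed-up files
file_blocks = set(range(50))  # the 50 blocks of the one file we restore

def volumes_needed(placement):
    """Distinct dblock volumes a restore of our file must download."""
    return len({placement[b] for b in file_blocks})

# Initially the file's blocks sit packed in consecutive volumes.
placement = {b: b // blocks_per_volume for b in range(total_blocks)}
packed = volumes_needed(placement)  # 50 blocks / 10 per volume = 5 volumes

# Model repeated compacts as a reshuffle: surviving blocks from
# different files get rewritten together into new volumes.
order = list(range(total_blocks))
random.shuffle(order)
for i, b in enumerate(order):
    placement[b] = i // blocks_per_volume
scattered = volumes_needed(placement)

print(packed, scattered)  # scattered is much larger than packed
```

Even this crude model shows the restore having to touch far more volumes once the file's blocks are mixed in with everyone else's.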