A few weeks ago, I was catching up on the forum and saw some reports of performance issues, with the recommendation to increase the blocksize for larger backups. When I created my backup years ago, I didn’t know about the blocksize option (it certainly doesn’t help that it is hidden in the advanced options), so I used the default. Because the backup keeps growing, I decided I should increase the blocksize, but I also didn’t want to lose my backup history.
Inspired by this post,
I extended the existing recovery tool so that it can completely rewrite an existing (local) backup with a new blocksize. The tool can be run on an encrypted local backup, or on one that was downloaded and decrypted as described on the disaster recovery manual page (at least in theory). The process is described in more detail on the GitHub release page:
With this, a day of processing time, and a few TB of write cycles sacrificed on my SSD, I successfully increased the blocksize of my 200GB backup from 100KB to 1MB (unless something is broken and I didn’t notice). Since others might be interested, I am releasing it as a standalone download.
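To give a rough sense of why this matters: the number of blocks the backup has to track scales inversely with the blocksize. Here is a minimal back-of-the-envelope sketch (my own illustration, not code from the tool; decimal units are used for simplicity, and real deduplication means the actual counts differ):

```python
def block_count(backup_bytes, blocksize_bytes):
    """Number of blocks needed to cover the backup, ignoring deduplication."""
    return -(-backup_bytes // blocksize_bytes)  # ceiling division

GB, KB, MB = 10**9, 10**3, 10**6

backup = 200 * GB
small = block_count(backup, 100 * KB)  # the old default blocksize
large = block_count(backup, 1 * MB)   # the new blocksize

print(small)  # 2000000 blocks to track
print(large)  # 200000 blocks, 10x fewer entries in the local database
```

With ten times fewer blocks, the local database has far fewer rows to look up and verify, which is where the performance recommendation on the forum comes from.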
I am not comfortable putting this into the main release of Duplicati, because it is mostly untested. If you decide to try it, I would appreciate feedback on whether or not it worked. Maybe it could become a part of the main program one day, if there is enough evidence that it does not break the backups.
Use at your own risk. I wrote this mainly for myself and have tested the process only once. It appears to have worked in my case, but I interrupted it several times and made code changes along the way. The safe approach is still to create a new backup with the increased blocksize.
This will create a new copy of your backup set. DO NOT delete the original copy of your backup! My hope is that even if something is silently broken and you don’t catch it during initial testing, new data added with the new blocksize should at least be reasonably safe. In that case, there might be a way to restore only the new files from the rewritten backup and fall back to the original backup for older files later on.