Found this on the web.
It's a couple of years old (May 2017), but the writer was evaluating various backup solutions and noted that if the local database is lost, restores with non-standard block sizes could fail.
This is EXACTLY my use case, so very concerning! My setup is basically my photo/video collection from over my life, about 300 GB (lots of baby videos, haha), which I back up with 800 MB blocks (just to avoid a huge folder with lots of .aes files). The purpose is that in a hard-disk-has-died scenario, I can get my (irreplaceable!) photos back.
Wondering if anyone has encountered similar problems to the ones that person described, and/or has a similar setup, actually went through a restore, and has thoughts on how to make it robust?
Seems like nobody who’s stopped by (not everybody follows the forum) is reporting current problems. Possibly the best thing to do is to try your own restore, from either your actual backup or a similar test backup.
Testing recovery from a hard-disk-has-died scenario is a good idea. People are sometimes surprised at how slow database recreation can run. Doing a direct restore to another computer helps ensure that works right. Database recreation actually got a speed-up recently, but it’s still in canary, not in any beta release yet.
There was once a problem somewhat like the one you referred to. In 184.108.40.206-220.127.116.11_canary_2018-06-30:
Fixed an issue where restores from the GUI would not autodetect blocksize and other parameters
Unable to restore backup: blocksize mismatch in manifest #2323 was (I think) the issue that was fixed.
Choosing sizes in Duplicati is worth reviewing. Your 800 MB sounds like –dblock-size, shown as “Remote volume size” in the GUI and called “Upload volume size” in the manual. This size can vary a lot, especially on the last volume of a backup, and Duplicati should handle whatever the backup job made.
–blocksize, which defaults to 100 KB, is the size that has to stay stable. Choose it per the article above, then leave it alone.
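One reason –blocksize matters so much for database size (and recreation time) is the sheer number of blocks the local database has to track. A rough back-of-the-envelope sketch in plain Python (nothing Duplicati-specific; real numbers also depend on deduplication and metadata blocks):

```python
# Rough estimate of how many fixed-size blocks Duplicati must track
# in its local database for a given source size and --blocksize.
# Illustrative only; deduplication and metadata change real numbers.

def block_count(source_bytes: int, blocksize_bytes: int) -> int:
    """Number of blocks needed to cover the source data (ceiling division)."""
    return -(-source_bytes // blocksize_bytes)

GB = 1024 ** 3
KB = 1024

source = 300 * GB  # ~300 GB photo/video collection
for bs_kb in (100, 500, 1024):
    n = block_count(source, bs_kb * KB)
    print(f"--blocksize={bs_kb}KB -> ~{n:,} blocks to track")
```

At the 100 KB default, a 300 GB collection means roughly 3.1 million blocks; raising –blocksize to 500 KB cuts that to about 630 thousand, which is why picking the value up front (and never changing it) matters.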
“Invalid manifest detected, the field Blocksize has value 512000 but the value 102400 was expected” is reporting a remaining problem in 18.104.22.168. That led to an update to the GitHub issue that claimed the fix. Testing 22.214.171.124, I think I’m seeing this on direct restore. The original fix seemed to allow updating of the parameters from the manifest. Maybe I should test 126.96.36.199 to see if its fix has become unfixed…
EDIT 2: Using Advanced Options --blocksize=500KB on screen 2 of the direct restore sequence fixed it. Your --blocksize might be different, or maybe you’re at the default; a larger remote volume size won’t matter.
That looks like the best way to stay out of trouble. I ended up exporting the configuration files and putting them in my Dropbox (with their fairly strong recovery capabilities, I’m not worried about losing them), and from there I can use Duplicati’s restore-from-configuration-file function.
I tested it and do get some warnings, but nothing fatal, so I’m happy enough not to dig further into the parameters.
That being said, this approach still needs a database rebuild, which seems to be an extremely time-costly step. I thought I saw this mentioned elsewhere: is there an option to back up the database as well?
188.8.131.52 canary makes it better thanks to the fix for “Empty source file can make Recreate download all dblock files fruitlessly with huge delay #3747” linked earlier, and that canary is hoped to lead to the next beta soon.
The option exists, but it’s do-it-yourself for now.
Would it be a good idea if Duplicati also put the local database (encrypted) on the remote location?
I am running everything on the canary versions, and the database step is still really painfully slow. Too bad I’m not a real programmer (and I have a day job); I wouldn’t mind supporting some dev effort to build in database backup as an option.