Hi,
I’ve increased the blocksize during backup. Do I need to set the blocksize explicitly when creating a restore job (temporary database, direct restore mode)?
Well, if you read the documentation, not to mention what the Web UI says, you see:
--blocksize = 100kb
(...)
Note that the value cannot be changed after remote files are created.
If that’s not clear enough: it means that when you change it, you should create a new backup.
If you only ever did one backup, it’s certainly set up with the original blocksize.
It should work without setting a blocksize explicitly.
You should not do that unless you want to risk your backup data. Having two databases targeting the same data files seems like something NOT to do. Restoring to another computer is normally done when the original computer is dead.
For recovery from a failed computer/disk, you need an export of your config - not saved on the original computer, of course. Then on the new computer you create a new job by importing the json file.
Before saving the new job, edit it to disable any automatic backup, since starting a backup while your new computer is still empty of any data would be a bad move.
Then after saving the new job, you recreate the database. If your backups are not messed up, it should be relatively fast, since Duplicati backs up information for this very purpose. Then you restore data from this new config.
To what? The historical rule of thumb has been to try not to track more than a few million blocks.
That would suggest a blocksize of 1 MB or so before the database and SQL become too slow.
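To make the arithmetic concrete, here’s a rough sketch in plain Python (not anything Duplicati ships); the 2 TB source size is just an assumed example:

```python
# Rough estimate of how many blocks the local database must track
# for a given source size and blocksize. Plain arithmetic, nothing Duplicati-specific.
def estimated_blocks(source_bytes: int, blocksize_bytes: int) -> int:
    return -(-source_bytes // blocksize_bytes)  # ceiling division

KB = 1024
MB = 1024 ** 2
TB = 1024 ** 4

print(estimated_blocks(2 * TB, 100 * KB))  # ~21.5 million blocks at the 100 KB default
print(estimated_blocks(2 * TB, 1 * MB))    # ~2.1 million blocks at 1 MB, near the rule of thumb
```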
You can move the database to any folder you like. Is that drive also the backup’s destination?
Sometimes keeping the database with the files it tracks is a way to handle a drive rotation plan.
It’s also a no-DB-rebuild-waiting-required way to get a restore going if the original system breaks.
If you’re talking about a backup (not the “live” location), the path is in an environment variable
DUPLICATI__dbpath=<whatever>, which you can use in a run-script-after to do the DB copy.
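As a sketch of what such a run-script-after could do (the destination folder below is a made-up example, and Duplicati will run whatever script you point --run-script-after at):

```python
#!/usr/bin/env python3
# Sketch of a --run-script-after script that copies the job database elsewhere.
# DUPLICATI__dbpath is set by Duplicati when the script runs;
# DEST_DIR is a made-up example destination.
import os
import shutil

DEST_DIR = "/mnt/backupdrive/duplicati-db-copies"  # example path, adjust to taste

db_path = os.environ.get("DUPLICATI__dbpath")
if db_path and os.path.exists(db_path):
    os.makedirs(DEST_DIR, exist_ok=True)
    shutil.copy2(db_path, DEST_DIR)
```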
We’ve already discussed how trying to put stale DB copies back in use can be catastrophic…
A large backup can do it. Too small a blocksize for its size can do it. Problems with the backup can also do it.
Typically when it goes looking for data, the progress bar is in the final 10%. Did yours get there?
You can also see About → Show log → Live → Verbose. Download of dblock files is a bad sign.
This is probably the best way to get back in business. The DB rebuild might be a little slower, and potentially troublesome compared to Direct restore from backup files, but you only run it once.
If that doesn’t work because the config wasn’t saved, you can still restore your data, just not the config. Sometimes browsing the restore will give you some clues about how to re-enter the config, though.
I’m not certain how the direct restore code gets blocksize, but one guess is the manifest in the zip:
{"Version":2,"Created":"20210620T130850Z","Encoding":"utf8","Blocksize":204800,"BlockHash":"SHA256","FileHash":"SHA256","AppVersion":"2.0.5.114"}
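If you want to check that yourself, a minimal sketch (assuming the volume is already decrypted to a plain zip, and assuming the internal entry is named "manifest") could look like:

```python
# Sketch: read "Blocksize" from the manifest inside a decrypted dlist zip.
# The zip filename and the "manifest" entry name are assumptions here.
import json
import zipfile

with zipfile.ZipFile("duplicati-20210620T130850Z.dlist.zip") as z:
    manifest = json.loads(z.read("manifest"))

print(manifest["Blocksize"])  # e.g. 204800 for a 200 KB blocksize
```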
I wouldn’t bother backing up the large, job-specific databases. Those can be recreated from the back end data, although a bug in how some older versions of Duplicati wrote dindex files can make that process painful. (There is a way to proactively fix that situation before you need to do a database recreate.)
Backing up Duplicati-server.sqlite may be quite useful, as it contains job definitions and other settings. Note that it contains credentials to the back end storage, so treat the file accordingly. I would just make a copy of it and store it somewhere secure; don’t back it up with Duplicati itself. You can repeat the copy whenever you make config changes.
Alternatively, you could export the job configurations to json format and store in a secure location. That’s what I actually do instead of backing up Duplicati-server.sqlite.
No. These are also kept in Duplicati-server.sqlite.
Options are in the server DB Duplicati-server.sqlite. This is in the TargetURL column of the Backup table.
Unless bash reads databases (I doubt that), you would have to run something from bash that does.
The Command Line Shell For SQLite may be one option, but there are many. Few do encryption, so you’d set
--unencrypted-database so that you (and malware) will have an easier time getting the desired access.
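If you do go the read-the-database route, a minimal sketch (assuming an unencrypted server database and a typical Linux path; adjust the path for your install) could be:

```python
# Sketch: read each job's target URL out of Duplicati-server.sqlite
# using Python's built-in sqlite3. Only works if the DB is unencrypted.
import sqlite3

db = sqlite3.connect("/root/.config/Duplicati/Duplicati-server.sqlite")
for (url,) in db.execute("SELECT TargetURL FROM Backup"):
    print(url)
db.close()
```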
You can do much better if you use the target URL direct from Duplicati for a script run after backup.
Because uploading and downloading large databases is not instant, you might try timing that against a DB recreate while the backup is not corrupted, meaning either try the proposed fix or time a fresh backup. Repeat periodically (you test restores anyway, I hope) to make sure it doesn’t go into dblock downloads.
All feature requests (and the existence of Duplicati) depend on volunteers. There is a severe shortage.
Anybody who is able to contribute to code, test, documentation, forum, GitHub support, etc. is needed.