So after a DB problem (Duplicati says there’s a problem with a file and Repair doesn’t resolve it) I used the delete and recreate option…
The DB was 9 GB (for 5 TB of data) and the recreate has now been running for more than 2 days and isn’t half complete…
Any suggestions?
For the future, how can I make a backup of the DB after a job completes?
thx
You could use the --run-script-after (or --run-script-before) option to run a script which copies %DUPLICATI__dbpath% to %DUPLICATI__dbpath%.old (or something like that)…
Well, it varies depending on things like what OS you’re using but for Windows you could do the following:
Put this in a file (we’ll call it C:\Duplicati-backupDB.bat)
@echo off
if /i not "%DUPLICATI__OPERATIONNAME%"=="BACKUP" exit 0
copy "%DUPLICATI__dbpath%" "%DUPLICATI__dbpath%.old"
Select the run-script-after entry from the “Add advanced option” selector on step 5 (Options) of editing your backup job
Put C:\Duplicati-backupDB.bat in the run-script-after field
Click the “Save” button
Now whenever the job runs it will copy your current (just finished being updated) .sqlite file to a “.old” file in the same folder.
Of course you can do other things in the batch file such as:
check the “success” variable before copying the DB (if you care)
copy the DB somewhere else (like to a USB drive if you’re backing up to one)
compress the DB (they can get big, so having multiple copies could get “painful”) - note that this could cause your backup job to appear to run longer due to the time spent compressing
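For Linux or macOS users, the same idea could look something like the sketch below. This assumes Duplicati exports the same DUPLICATI__-prefixed variables on those platforms (it should, but verify against your own environment before relying on it):

```shell
#!/bin/sh
# Sketch of a run-script-after for Linux/macOS, analogous to the
# Windows batch file above. Variable names assume the usual
# DUPLICATI__ convention; check what your version actually exports.

copy_job_db() {
    # Only act after a backup run, not e.g. a restore or verify
    # (matching case-insensitively, like the batch file's "if /i")
    case "$DUPLICATI__OPERATIONNAME" in
        [Bb]ackup|BACKUP) ;;
        *) return 0 ;;
    esac
    # Keep a copy of the freshly updated job database next to the original
    cp "$DUPLICATI__dbpath" "$DUPLICATI__dbpath.old"
}

copy_job_db
```

You could extend it the same way as the batch file: check the success variable first, copy to another drive, or gzip the copy.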
Yep, that works too. I think I saw another user talking about setting up a 2nd backup job with the sole purpose of backing up the 1st job’s database file.
If you’re storing in the cloud, that makes sense as the de-duplication will cut down on data transfer. The drawback is that if you end up needing to restore the 1st job you’ll first have to restore the 2nd job to get the DB to then use on the 1st job.
It’s not a big deal - having a small backup of just the database file(s) should restore very quickly, but it’s another step to keep in mind in case of disaster recovery.
I haven’t looked specifically at the database recreate process, but my guess is there’s just a lot of database processing going on. Remember, this process isn’t just restoring a file from a remote destination - it’s reading through every index file on the destination and adding the entire history of every file’s contents in 100KB (by default) blocks of data.
With a 12TB data source chopped into 100k blocks that’s 128.8 million block records being recreated in the local database for the initial backup. Then add to that however many blocks changed with each new file or historical version of an existing file.
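As a quick sanity check on that figure (the numbers work out if you read it as 12 TiB of source data and 100 KiB blocks, i.e. binary units):

```shell
# Rough arithmetic behind the "128.8 million" figure above:
# 12 TiB of source data divided into 100 KiB blocks.
blocks=$(( (12 * 1024 * 1024 * 1024 * 1024) / (100 * 1024) ))
echo "$blocks blocks"   # roughly 128.8 million block records
```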
Once some of the more commonly used functionality has been updated and finalized (like the pending improvement in restore browsing speed) I’m hoping time can be found to review the database recreate process for performance (as well as reporting) improvements.
My guess is that once a recreate is started a database will have been created, at which point a Repair should be enough to continue from any interruption point, but @kenkendk would know better.
Edit: Note that my guess is WRONG and it appears once a “The database was attempted repaired, but the repair did not complete.” message is received a Recreate action is necessary to recover.
@Bazzu85, have a little patience - kenkendk generally only has time to hit the forums once or twice a week.
@drwtsn32, I don’t know of any list, but pretty much any parameter is exposed with a DUPLICATI__ prefix (yes, two underscores at the end of the prefix) and any “-” replaced with “_”.
I ended up putting a ‘set > filename’ command in my post-backup script so I could see all the values. Here they are in case others are curious. (Some values have been redacted.) This was from version 2.0.2.1.
However, I should note that the list generated from set won’t include ones with no value, so what you got is probably everything you have values for in the particular job on which you ran the set command.
So there are a lot of optional/advanced/less-commonly-used parameters not represented in the above list.
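If you want to capture the list yourself, the ‘set’ trick above can be narrowed to just the Duplicati variables. A small sketch (the output path is only an example; which variables appear depends on your Duplicati version and job options):

```shell
# Write every DUPLICATI__* variable the script can see to a file.
# Variables with no value won't appear, per the caveat above.
dump_duplicati_vars() {
    set | grep '^DUPLICATI__' > "${1:-/tmp/duplicati-vars.txt}"
}

dump_duplicati_vars
```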