Saving the Duplicati activity database

I just restored the Pi I use for Duplicati from an SD clone made a week ago, and then realized that the historical Duplicati backup information was out of date. Is there a way to periodically save that database? And then, of course, to restore it? Thanks


If you want to do that (not advisable, because it can lead to discrepancies between your database and the backend), stop Duplicati, copy the *.sqlite files to any destination you want, and then restart Duplicati.
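That stop/copy/restart sequence could be sketched as a shell function like the one below. The database location, destination directory, and the "duplicati" service name are assumptions for illustration; adjust them for your install.

```shell
# Sketch only: snapshot Duplicati's *.sqlite files while the service is stopped.
# The paths and the "duplicati" unit name are assumptions, not verified defaults.
backup_duplicati_dbs() {
    db_dir="${1:-/root/.config/Duplicati}"      # where the *.sqlite files live
    dest="${2:-/mnt/usb/duplicati-db-backup}"   # any destination you want
    svc="${SYSTEMCTL:-systemctl}"               # set SYSTEMCTL=true to dry-run

    "$svc" stop duplicati || return 1
    mkdir -p "$dest" &&
        cp -p "$db_dir"/*.sqlite "$dest"/
    status=$?
    "$svc" start duplicati                      # restart even if the copy failed
    return $status
}
```

Run it as the user that owns the service (typically root for a system-wide install), for example `backup_duplicati_dbs /root/.config/Duplicati /mnt/usb/duplicati-db-backup`.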

GPatel… OK, but I can't be the first one to have the OS crash. How does one recover backups in this situation?

My plan was to have a cron job that copies the SQLite files after each Duplicati operation. On Linux, is Duplicati a service that can be stopped and started from the command line?

I’m VERY interested in your thoughts….RDK

Exactly the same way that the gazillions of backup programs that cache information in a local database do it: by rebuilding it from the backend. Actually, Duplicati can restore without explicitly rebuilding the database; it rebuilds a temporary database automatically. You can also rebuild it explicitly and then do the restore of files from the UI.

Granted, if you can be certain that after a crash you have an up-to-date database, you can save the time needed for this rebuild and, theoretically, restore the database and start from there. What would you gain? Rebuild duration depends on many things: the backend data and its complexity (number of files, block size), disk performance, everything. From what I see, you might save an hour or an hour and a half against 15-20 hours of restore time for a not-so-minimal data size. Is it worth the hassle? Difficult to say; you have to factor in the crash probability. And if your saved database is not up to date, you could also have some pain at restore time.

On systems supporting systemd:

systemctl stop duplicati
systemctl start duplicati
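Combined with the cron idea above, a crontab entry could look like the sketch below. The 03:00 schedule, both paths, and the unit name are assumptions to adapt; run it from the root crontab if the service runs system-wide.

```shell
# Hypothetical root crontab entry: nightly at 03:00, stop the service,
# copy the job databases aside, then restart. Paths and times are examples.
# 0 3 * * * systemctl stop duplicati && cp -p /root/.config/Duplicati/*.sqlite /mnt/usb/duplicati-db-backup/ && systemctl start duplicati
```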

Agreed, and this makes it likely that an attempted backup will complain about files not in any DB records. Running Repair will happily delete the new, unknown files, and that's the death of the newer backups.

duplicati- just destoryed one month worth of backup #4579 has some thoughts about how the mismatch could be detected and handled, but it needs developer volunteers. Because there is a big lack of developer volunteers (any takers?), all support can do is advise people.

You can probably just use a run-script-after option on the backup job, but depending on your upload speed, uploading the database may take longer than uploading the backup itself, which only uploads file changes.

The scripting documentation describes writing scripts and what is available, e.g. from the environment. Here's one for a job database:
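A minimal sketch of such a run-script-after script, assuming Duplicati exports the job's dbpath option to the script as the `DUPLICATI__dbpath` environment variable (Duplicati exports options to scripts with a `DUPLICATI__` prefix); the destination directory is hypothetical.

```shell
#!/bin/sh
# Sketch of a --run-script-after script: copy the job's database aside after
# each operation. DUPLICATI__dbpath is assumed to hold the job's database
# path; /mnt/usb/duplicati-db-backup is a hypothetical destination.
copy_job_db() {
    dest="${1:-/mnt/usb/duplicati-db-backup}"
    [ -n "$DUPLICATI__dbpath" ] || return 0   # not invoked by Duplicati: do nothing
    mkdir -p "$dest"
    cp -p "$DUPLICATI__dbpath" "$dest"/
}
copy_job_db "$@"
```

Point --run-script-after at a file like this; keep in mind the caveat above that copying or uploading the database can take longer than the incremental backup itself.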


Some people who used to copy databases have stopped doing that, and instead run a recreate as needed (which admittedly takes time in the best case; worse cases might download the whole backup or not succeed at all).

@gpatel-fr … OK, but your response is not very comforting to me as a user of this software. And now, after reading the reference in ts678's reply (duplicati- just destoryed one month worth of backup · Issue #4579 · duplicati/duplicati · GitHub), I am both confused and worried.

Can you please be more specific (i.e. detailed) about the "CORRECT" process for recovering from a system crash without losing all of the recovery backup files?

On the recommendation of a friend, I'm using this system to back up files from three different computers. Until now his recommendation sounded like the ideal solution…. RDK

The documentation is available at:

The last chapter is called "Disaster Recovery".

Ideal solutions don't exist. You must decide what level of risk you want to accept, and act accordingly.

Typically, for serious backing up, best practice is multiple backups, at least one offsite and one onsite, possibly using different software. Time-to-recovery needs are also a factor that can guide the direction.

You're trading off the variables, including whether or not a stale backup is good enough if a newer one fails. Personal backups can sometimes tolerate this less-than-ideal situation. Business accounts might not… Serious backups get tested in various scenarios, for example whether a restore works when needed.

Good practices for well-maintained backups asked a great question, and there are lots of ideas there. Possibly you won't want to do all of it, because it can be a lot, but how far you go depends on your needs.

shows that you already use multiple methods, although I'm not clear on exactly what is backed up.

If that covered the basic recovery, and fine-tuning it with the latest data is desired but not essential, I'd suggest following the cited disaster recovery method, or maybe a variation: put back the configuration using the Export To File that you saved somewhere safe, or use some guesswork if there is none. A regular database recreate from the destination will (unlike Direct restore from backup files) leave you a permanent local database, if the goal was to just keep going on the new drive with the prior historical backups.
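For the recreate itself, the rough CLI shape is sketched below; the destination URL and database path are placeholders, and you should check `duplicati-cli help repair` on your install first, since (per the warning earlier in the thread) repair behaves very differently when an old database is present.

```shell
# Assumed CLI shape (verify with "duplicati-cli help repair"): when no local
# database exists at --dbpath, repair recreates it from the destination.
# Never run it against a stale database -- that is the data-destroying case
# discussed above. URL and path below are placeholders.
# duplicati-cli repair file:///mnt/backup-destination --dbpath=/path/to/job.sqlite
```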

Everybody’s needs are different. Some (perhaps for business) have more time-critical recovery needs.
Some may have more critical reliability needs. If so, add backups and methods in case one falls short.

If you want to avoid destroying new backups, avoid the mistake of running Repair with an old database; that's pretty much the only way to get Duplicati to do that, and it generally takes user action, unless auto-cleanup is set AND you've put back a stale database from an image restore, which opens that risk.

Basically, anything other than what's in the paragraph above, but "CORRECT" does not mean guaranteed to work. Recreating the database may be fast, slow, or slow-then-fail, but it doesn't risk the files at the destination.