Cannot repair database; unclear how to proceed

Hello all, please advise:

Running on Ubuntu Xenial, backing up to a remote SSH server. I've actually been running Duplicati 2.x on Ubuntu for 2-3 years now without many problems. That is, until I wasn't paying attention and allowed my root partition to fill up on the client. The database got corrupted, and I've been stuck in a cannot-repair state ever since.

From the GUI, I’ve tried deleting and recreating the database. It runs for about 24 hours, then eventually fails.

From the CLI, I can barely even start the process:

root@r# duplicati-cli repair "ssh://HOST:22//backup/?auth-username=bla&auth-password=bla&ssh-keyfile=bla" --dbpath="/root/.config/Duplicati/RWFOPCGNWN.sqlite" --passphrase=bla
Listing remote folder ...
Downloading file (1.61 MB) ...

... And there it hangs. The GUI repair/recreate at least gets a bit further, as my db.sqlite file does get rebuilt (~600MB before it hangs and fails), but my CLI repair attempts stop at downloading that first remote file. I'm obviously flubbing the command line parameters somehow, but the application isn't telling me what; it just sits at "downloading file ..." forever.

Ah and one more thing - I realize I am not running the latest version of Duplicati. This is because it looks like anything past this version requires a newer version of mono-runtime than what Ubuntu 16.04 comes packaged with? Before I go mucking with the rest of the OS I want to make sure the problem isn’t something with my Duplicati config/settings.


Welcome to the forum @jim. A newer version would perhaps have fewer problems. It's much better at not breaking, but just a bit better at repairs.
Most of the improvement is in speed, so the slow recreate that failed might not be quite as slow.

Was there a progress bar on the Recreate? If so, did you notice where it was and how it behaved?
Everything past 70% is large dblock file downloads. The last 10% is an often slow final search. A newer version does better at doing only the earlier, faster downloads of the smaller dlist and dindex files.
The mono project download link for 16.04 LTS is on this page. Personally, I use mono-complete.

What sort of message does the final failure give?

The easiest way to get all the right parameters is to change an Export As Command-line into a repair.
Using the Command line tools from within the Graphical User Interface also carries all options in.
Neither is guaranteed to work better than the GUI buttons though, as it’s similar code underneath.
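As a sketch of that export-then-edit approach (every value below is a placeholder, not taken from a real job), the repair form keeps the exported URL and options but swaps the verb and drops the source paths:

```shell
# Exported backup command (illustrative placeholders throughout):
#   duplicati-cli backup "ssh://HOST:22//backup/?auth-username=USER&auth-password=PASS&ssh-keyfile=KEY" \
#       /home/user/data --dbpath="/path/to/job.sqlite" --passphrase=SECRET

# Repair form: same URL and options, "repair" instead of "backup", no source paths:
duplicati-cli repair "ssh://HOST:22//backup/?auth-username=USER&auth-password=PASS&ssh-keyfile=KEY" \
    --dbpath="/path/to/job.sqlite" --passphrase=SECRET
```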

CLI perils were covered. For the above, you could try posting a job Export with redactions, though knowing the GUI behavior might be more useful, e.g. the progress bar info and the final failure message.
Repair should almost always be happy to try with whatever settings the Backup was happy with.

If you want to get a better view of the hang, you can use About --> Show log --> Live --> Retry for starters (there are other levels possible), or use --log-file and --log-file-log-level. That should show network-level (e.g. download) work plus general status. But your final error would also give clues.
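A hedged sketch of the file-based logging route (the log path and level here are arbitrary choices, and the URL is a placeholder):

```shell
# Add logging options to whatever repair command the export gave you,
# then watch the log from another terminal:
duplicati-cli repair "<same-url-and-options-as-the-backup>" \
    --log-file=/root/duplicati-repair.log \
    --log-file-log-level=Retry    # Verbose or Profiling give more detail
tail -f /root/duplicati-repair.log
```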

Ah, thank you so much for the reply! Yes, the progress bar makes it into the final 10% (even further than that, the last < 5%) but then errors out. Maybe it even makes it all the way through? That last 10% or so takes a while and then ends with a failure message. I don't remember the exact message, but I ran it twice and just remember it being not particularly memorable or informative, so just something like "repair failed" or something to that effect.

I'm in the process of creating a new backup [exported job, edited JSON to change name/db/path/etc.], only because it's been a couple of weeks since my last good backup and I want to make sure the data's preserved. After that I'll attempt the repair again and get the exact errors to post here. I'll also try the tips about the live log and CLI. Thank you again!

If it didn’t make it, there might have been a download error or a bad file on the remote destination.
If it made it, it might have finished the exhaustive search, still have had missing data, and errored.

Sometimes messages are short, and you have to go to About --> Show log --> Live or your log file.
Whether you saw a short message or something big but incomprehensible (it happens...) isn't clear.

That sounds like my plan, which perhaps leans too much on "start a new backup, mess with the old one later."
It works well if space and bandwidth allow, and it permits a good failure analysis and maybe a bug fix.

If it turns out that the Recreate just won't go, the old remote backup might still be usable by Duplicati.CommandLine.RecoveryTool.exe, although that's not as nice as a successful DB recreate.
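For reference, the RecoveryTool works in rough stages. This is a sketch from memory with placeholder paths and URL, so check the tool's built-in help for exact arguments before relying on it:

```shell
# Run under mono on Linux; each stage is a separate invocation.
mono Duplicati.CommandLine.RecoveryTool.exe download "<remote-url>" /tmp/dl  # fetch remote volumes
mono Duplicati.CommandLine.RecoveryTool.exe index /tmp/dl                    # build a local lookup index
mono Duplicati.CommandLine.RecoveryTool.exe restore /tmp/dl                  # restore files from the volumes
```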

Update: I am happy to report I was able to rebuild my database! I’ll post my results here in case anybody else runs into this problem:

After generating a known-good backup I went back to try to recreate my old DB. I exported my job as a command line as you recommended and had to take out an extra parameter, but otherwise left it as-is. Same results: it got to "Downloading Files" and stuck at that first file again.

At this point I switched to the GUI, as GUI repair had generally gotten further, but it had stopped working as well! It was stuck at what appears to be the same place, never successfully downloading that first file.

So at this point I threw in the towel, pointed apt at the Mono project repos, and upgraded to Mono v6.8.0.105. This allowed me to run Duplicati and restart a DB repair via the GUI. It took a few days to download the ~1300 blocklist volumes, which gave me time to open a few windows to watch the live log at Profiling and Verbose levels.

It got to the VERY end and finally threw an error! One missing filelist; try running list-broken-files or purge-broken-files... I'd spent enough time on this, so I went straight to PURGE. To my surprise it ran relatively quickly, maybe only 20 minutes or so, and in the logs I saw: "Purge completed, and consistency checks completed, marking database as complete".
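For anyone else who hits that error: the suggested commands take the same target URL and options as the backup itself (sketched here with placeholders; list first to see what would be lost, then purge):

```shell
# Show which source files reference the missing remote data:
duplicati-cli list-broken-files "<same-url-and-options-as-the-backup>" \
    --dbpath="/path/to/job.sqlite" --passphrase=SECRET

# Remove those entries so the rest of the backup is consistent again:
duplicati-cli purge-broken-files "<same-url-and-options-as-the-backup>" \
    --dbpath="/path/to/job.sqlite" --passphrase=SECRET
```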

The culprit? It was that first bloody file it kept getting hung up on! It was the oldest file in the set, and apparently the cause of all my problems! ARGH! I don't know what happened, but I've since run a couple more backups and everything is working as before.

@ts678 thank you so much for your help. Your list of tips and tricks was very helpful, and it definitely helped me understand what was going on a lot better. Cheers!