How to recreate a backup job

I am aware that some users report poor (read: ridiculous) performance for recreating the database. The performance depends on multiple factors, such as disk type, number of files, etc.

I have it as a priority item, but there are only so many hours in a day …

Should you need to get the files out, it is possible to restore data without building the database. There is a tool bundled with Duplicati called Duplicati.CommandLine.RecoveryTool.exe, which will do it for you. There is also a Python implementation of it, should you want to restore without Mono installed.
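If you go that route: as far as I remember the RecoveryTool works in three steps (download the remote files to a local folder, build a temporary index over them, then restore from that local copy). The URL and folders below are just placeholders, and the tool’s built-in help has the exact syntax:

mono Duplicati.CommandLine.RecoveryTool.exe download b2://folder?authid=xyz /tmp/recovery
mono Duplicati.CommandLine.RecoveryTool.exe index /tmp/recovery
mono Duplicati.CommandLine.RecoveryTool.exe restore /tmp/recovery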

All the errors mention ThreadAbortException, which usually happens if you force-abort the job. I assume that is what you did.

Can you try to do the database recreate from the commandline?

It should be something like this:

mono Duplicati.CommandLine.exe repair b2://folder?authid=xyz --dbpath=/newdb.sqlite --verbose

This way it is easier to diagnose what goes wrong. You just need to make sure that --dbpath points to a non-existing file, and Duplicati will build the database in that file.

OK, I’m running from the command line.

It has downloaded a bunch of files, and here is a snippet of the output:

Downloading file (62.53 KB) …
Downloading file (19.26 KB) …
Downloading file (88.67 KB) …
Downloading file (17.97 KB) …
Downloading file (18.09 KB) …
Downloading file (62.43 KB) …
Downloading file (18.23 KB) …
Downloading file (36.87 KB) …
Downloading file (33.76 KB) …
Downloading file (17.97 KB) …
Downloading file (34.00 KB) …
Processing required 5 blocklist volumes: duplicati-b19ac4d4f8fb04310ac683f3d78f1a59e.dblock.zip.aes, duplicati-b2b07d78f62fc421eac6edabe008b7cd0.dblock.zip.aes, duplicati-b40a708025aff44a6a3d6f01009fd83a7.dblock.zip.aes, duplicati-ba896a6d741804561975ccf4fa0673037.dblock.zip.aes, duplicati-bdcadd8c4aab84469bc8e9729f19421d2.dblock.zip.aes
Downloading file (1.61 MB) …
Downloading file (50.08 MB) …
Downloading file (49.99 MB) …
Downloading file (50.02 MB) …
Downloading file (50.07 MB) …
Processing all of the 3536 volumes for blocklists: duplicati-b000a9e62beba4cda9f9de9b4c063b477.dblock.zip.aes, duplicati-b004f9199370a45fe8705d2a1c579c787.dblock.zip.aes, duplicati-b008edb18c42e4510b5844a3377ef

etc… for a load of zip.aes files, then it starts downloading again:

Downloading file (49.99 MB) …
Downloading file (49.98 MB) …
Downloading file (49.96 MB) …
Downloading file (49.93 MB) …
Downloading file (49.97 MB) …
Downloading file (49.99 MB) …
Downloading file (49.97 MB) …
Downloading file (49.99 MB) …
Downloading file (49.95 MB) …
Downloading file (49.91 MB) …
Downloading file (49.99 MB) …
Downloading file (49.99 MB) …
Downloading file (49.99 MB) …
Downloading file (49.99 MB) …
Downloading file (50.00 MB) …
Downloading file (49.94 MB) …
Downloading file (49.91 MB) …
Downloading file (49.99 MB) …
Downloading file (49.91 MB) …
Downloading file (49.94 MB) …

and it has frozen up here. The mono process is using varying amounts of CPU, randomly between 90 and 100%.

I stopped the process, and then this was displayed:

mono_os_mutex_lock: pthread_mutex_lock failed with “Invalid argument” (22)
Abort

Not sure if that’s because I stopped the process or not.

EDIT:

I’ve started this again, this time logging to a file, and will leave it running for a few days to see what happens.

OK, I have checked what’s in the logs.

It’s currently doing this:

Processing all of the 3536 volumes for blocklists: duplicati-b000a9e62beba4cda9f9de9b4c063b477.dblock.zip.aes, duplicati-b004f9199370a45fe8705d2a1c579c787.dblock.zip.aes
etc.
etc., for a bunch of similar lines, then:

Downloading file (49.99 MB) …
Downloading file (49.98 MB) …
Downloading file (49.96 MB) …
Downloading file (49.93 MB) …
Downloading file (49.97 MB) …

etc… over and over.

It’s still logging; it appears to download a file every 5 minutes or so, with CPU at 100%.

Above it said “Processing all of the 3536 volumes for blocklists”. Does that mean it is downloading 3536 files at ~50 MB each??? That’s about 176 GB, the whole backup.

Yes, it is now downloading everything, because it is missing some information that it expected to find in the dindex files but, for some reason, did not. Since there is no way of knowing which of the dblock files contains that information, it just keeps running through them until it finds what it needs.
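To illustrate (a conceptual sketch only, not Duplicati’s actual code, and the names are made up): once the dindex files have come up short, the recreate step effectively has to do something like this:

def find_missing_blocks(missing_hashes, dblock_volumes, download):
    # missing_hashes: set of block hashes not found in any dindex file
    # dblock_volumes: names of all remote dblock files
    # download: function that fetches one volume and returns {hash: block}
    for volume in dblock_volumes:  # worst case: every ~50 MB volume
        if not missing_hashes:
            break  # stop as soon as nothing is missing any more
        for block_hash in download(volume):
            missing_hashes.discard(block_hash)
    return missing_hashes  # anything left was not found anywhere

In other words, it stops as soon as everything is accounted for, but if something is genuinely missing it has to go through the whole list.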

I figured that’s what it must be doing. Why does it take such a long time to download and process the files? It’s going to take 12 days to download everything at its current speed.
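(Rough math: 3536 volumes × ~50 MB ≈ 177 GB in total, and at one volume every ~5 minutes that’s 3536 × 5 min ≈ 17,700 minutes, which is a bit over 12 days.)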

Is there any way to instruct the process to retry files forever?

My internet is not that great at the moment, and the database rebuild keeps aborting whenever the connection drops…

Operation Get with file duplicati-b2923cce79af542119457b250e895f393.dblock.zip.aes attempt 1 of 5 failed with message: Error: NameResolutionFailure => Error: NameResolutionFailure
Downloading file (15.88 MB) …
Operation Get with file duplicati-b2923cce79af542119457b250e895f393.dblock.zip.aes attempt 2 of 5 failed with message: Error: ConnectFailure (Connection timed out) => Error: ConnectFailure (Connection timed out)
Downloading file (15.88 MB) …
Operation Get with file duplicati-b2923cce79af542119457b250e895f393.dblock.zip.aes attempt 3 of 5 failed with message: Error: ConnectFailure (Connection timed out) => Error: ConnectFailure (Connection timed out)
Downloading file (15.88 MB) …
Operation Get with file duplicati-b2923cce79af542119457b250e895f393.dblock.zip.aes attempt 4 of 5 failed with message: Error: ConnectFailure (Connection timed out) => Error: ConnectFailure (Connection timed out)
Downloading file (15.88 MB) …
Operation Get with file duplicati-b2923cce79af542119457b250e895f393.dblock.zip.aes attempt 5 of 5 failed with message: Error: ConnectFailure (Connection timed out) => Error: ConnectFailure (Connection timed out)

You could try increasing --retry-count and --retry-delay from the defaults (5 and 10 seconds, I think); they might do what you need.
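For example, with the repair command from above (the values here are just illustrative, pick whatever suits your connection):

mono Duplicati.CommandLine.exe repair b2://folder?authid=xyz --dbpath=/newdb.sqlite --retry-count=50 --retry-delay=30s --verbose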

(Edit: Sorry about that bad parameter - stupid auto-correct!)