Duplicati-cli repair operation stalls

I’ve been working on switching some of my jobs from a normal server instance of Duplicati to my duplicati-in-docker image.

One job is outputting a list of missing files when I run the backup:

Found 39 files that are missing from the remote storage, please run repair
Fatal error => Found 39 files that are missing from the remote storage, please run repair

ErrorID: MissingRemoteFiles
Found 39 files that are missing from the remote storage, please run repair

So I am trying to run the repair operation.

Unfortunately, the repair just stalls. I had to forcibly stop the last attempt after more than 80 hours.

To see if I could figure out what is causing that, I added the --debug-output and --debug-retry-errors flags. All I get is this:

Starting repair at Tue Feb  5 18:58:51 UTC 2019.
Input command: repair
Input arguments: 
    googledrive uri

Input options: 
backup-name: name of backup
dbpath: /data/duplicati/nameofdb.sqlite
passphrase: < password >
send-mail-from: email
send-mail-level: Warning,Error,Fatal
send-mail-password: email pass
send-mail-to: email
send-mail-url: smtpuri
send-mail-username: username
disable-module: console-password-input
rebuild-missing-dblock-files: 
debug-output: true
debug-retry-errors: true

  Listing remote folder ...

The `Listing remote folder ...` line is where the previous jobs have stalled.

It shouldn’t be network related. This is on a VM with a 10G network connected to fiber, and I don’t think there’s much between our datacenter and the backbone…

I think I’ll try increasing the log-level next, but is there anything else I can try?

I tried the `--console-log-level` flag. The last few lines changed to:

The operation Repair has started
Backend event: List - Started:  ()
  Listing remote folder ...
Backend event: List - Completed:  (833 bytes)

And now it’s stalled again.

htop shows the job is using CPU and memory.

`--console-log-level=Profiling` should be about as noisy as it gets, and a lot noisier than the snippet shown. It should show details of most SQL operations, and all backend operations. What storage type is this?
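If the console output gets too hard to scroll through, the same detail can go to a file instead. A rough sketch, assuming your build has the newer --log-file / --log-file-log-level options (the log path is just an example):

duplicati-cli repair googledriveuri \
  --dbpath=/data/duplicati/nameofdb.sqlite \
  --log-file=/data/duplicati/repair-profiling.log \
  --log-file-log-level=profiling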

I was hoping to avoid that level of logging… Heh.

The job I’m running is this (sanitized):

docker run \
     --rm \
     --name repair-job \
     --hostname repair-job \
     --memory 1024m \
     --memory-swap="1536m" \
     --cpus="0.75" \
     --volume duplicati_data_storage:/data/duplicati \
     --volume /path/to/data/on/nfs/mount/:/path/in/container/:ro \
     --volume /opt/databasedumps:/opt/databasedumps:rw \
     --volume /var/www/app:/var/www/app:ro \
     --volume /root/scripts:/root/scripts:ro \
     duplicat-in-docker-image \
       duplicati-cli repair googledriveuri \
         --backup-name="blah" \
         --dbpath=/data/duplicati/NamedDB.sqlite \
         --passphrase=password \
         --disable-module=console-password-input \
         --rebuild-missing-dblock-files \
         --debug-output=true \
         --debug-retry-errors=true \
         --console-log-level=profiling

I forked David Reagan / Duplicati-In-Docker · GitLab to my workplace’s internal GitLab instance. (Yes, the original project is mine as well.) So that’s the image I’m running.

The SQLite db lives on an NFS mount. So does the uploaded data. The code is local to the VM.

I believe the SAN the NFS mount is on is the same one VMware (an ESXi cluster) uses for the VMs’ drives, so I’d hope that’s not the bottleneck.
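If I want to rule the storage out more rigorously, something like this might show whether the NFS mount is actually slow while the repair runs. This is only a sketch: it assumes nfs-utils and sysstat are installed on the VM, and the mount path is illustrative.

# Per-mount NFS read/write latency, sampled every 5 seconds
nfsiostat 5 /path/to/data/on/nfs/mount

# Local/virtual disk utilization for comparison
iostat -x 5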

Is there anything in the profiling level of logging that would be sensitive? What I’m seeing mostly looks like hashes. No file names.

Once I’m sure the job is fully stalled, I’ll post some output.

Section 1.0 of File Locking And Concurrency In SQLite Version 3 advises against that (SQLite databases on NFS). It might work out; I don’t know.
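If you want a very rough sanity check that advisory locking works at all on that mount, something like this might do (the path is illustrative). Caveat: flock(1) takes BSD-style locks, while SQLite’s default unix VFS uses POSIX fcntl locks, so a pass here is only a hint, not proof.

# Try to take a lock on a scratch file on the NFS mount
touch /path/to/nfs/mount/locktest.lock
flock -n /path/to/nfs/mount/locktest.lock -c 'echo lock acquired' \
  || echo 'could not take the lock'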

I think you might get file names. Even Verbose level shows them during scan, and Profiling shows more. Possibly repair won’t give names, but I know my backup writes source file names in my Profiling log e.g.:

2018-11-30 08:39:48 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: C:\BackThisUp\sub2\sub2file1.txt

I could see file locking being an issue if multiple servers were using the db, but this is the only process using it. Then again, my knowledge of NFS at that level barely scratches the surface, so I could be wrong…

I suppose I could try moving it to a local partition, just to see.
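A quick way to test that without touching the original might be to clone the database onto local disk and mount the copy instead. This is just a sketch, assuming the named volume is what actually sits on NFS and that the local path is free to use:

# Copy the job database out of the NFS-backed volume onto local disk
mkdir -p /var/lib/duplicati-local
docker run --rm \
  --volume duplicati_data_storage:/src:ro \
  --volume /var/lib/duplicati-local:/dst \
  alpine cp /src/NamedDB.sqlite /dst/NamedDB.sqlite

# Then swap the volume line in the repair command for:
#   --volume /var/lib/duplicati-local:/data/duplicati \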

Anyway, since I last posted, this is as far as it has gotten:

The operation Repair has started
Starting - Running Repair
Starting - ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549497474); SELECT last_insert_rowid();
ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549497474); SELECT last_insert_rowid(); took 0:00:00:00.010
Starting - ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549497475); SELECT last_insert_rowid();
ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549497475); SELECT last_insert_rowid(); took 0:00:00:00.007
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.002
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.002
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.001
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM ( SELECT DISTINCT c1 FROM (SELECT COUNT(*) AS "C1" FROM (SELECT DISTINCT "BlocksetID" FROM "Metadataset") UNION SELECT COUNT(*) AS "C1" FROM "Metadataset" ))
ExecuteScalarInt64: SELECT COUNT(*) FROM ( SELECT DISTINCT c1 FROM (SELECT COUNT(*) AS "C1" FROM (SELECT DISTINCT "BlocksetID" FROM "Metadataset") UNION SELECT COUNT(*) AS "C1" FROM "Metadataset" )) took 0:00:00:00.258
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "Path", "BlocksetID", "MetadataID", COUNT(*) as "Duplicates" FROM "File" GROUP BY "Path", "BlocksetID", "MetadataID") WHERE "Duplicates" > 1
ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "Path", "BlocksetID", "MetadataID", COUNT(*) as "Duplicates" FROM "File" GROUP BY "Path", "BlocksetID", "MetadataID") WHERE "Duplicates" > 1 took 0:00:00:03.607
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "BlocksetID", "Index", COUNT(*) AS "EC" FROM "BlocklistHash" GROUP BY "BlocksetID", "Index") WHERE "EC" > 1)
ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "BlocksetID", "Index", COUNT(*) AS "EC" FROM "BlocklistHash" GROUP BY "BlocksetID", "Index") WHERE "EC" > 1) took 0:00:00:00.095
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual")
ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual") took 0:00:00:03.760
Starting - ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549497482); SELECT last_insert_rowid();
ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549497482); SELECT last_insert_rowid(); took 0:00:00:00.007
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.005
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.002
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "Block" WHERE "Size" > 102400
ExecuteScalarInt64: SELECT COUNT(*) FROM "Block" WHERE "Size" > 102400 took 0:00:00:47.291
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.038
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.001
Starting - RemoteOperationList
Backend event: List - Started:  ()
  Listing remote folder ...
Backend event: List - Completed:  (833 bytes)
RemoteOperationList took 0:00:00:18.505
Starting - ExecuteReader: SELECT DISTINCT "Name", "State" FROM "Remotevolume" WHERE "Name" IN (SELECT "Name" FROM "Remotevolume" WHERE "State" IN ("Deleted", "Deleting")) AND NOT "State" IN ("Deleted", "Deleting")
ExecuteReader: SELECT DISTINCT "Name", "State" FROM "Remotevolume" WHERE "Name" IN (SELECT "Name" FROM "Remotevolume" WHERE "State" IN ("Deleted", "Deleting")) AND NOT "State" IN ("Deleted", "Deleting") took 0:00:00:00.060
Starting - ExecuteNonQuery: CREATE TEMPORARY TABLE "MissingBlocks-HASHHASHHASH" ("Hash" TEXT NOT NULL, "Size" INTEGER NOT NULL, "Restored" INTEGER NOT NULL) 
ExecuteNonQuery: CREATE TEMPORARY TABLE "MissingBlocks-HASHHASHHASH" ("Hash" TEXT NOT NULL, "Size" INTEGER NOT NULL, "Restored" INTEGER NOT NULL)  took 0:00:00:00.000
Starting - ExecuteNonQuery: INSERT INTO "MissingBlocks-HASHHASHHASH" ("Hash", "Size", "Restored") SELECT DISTINCT "Block"."Hash", "Block"."Size", 0 AS "Restored" FROM "Block","Remotevolume" WHERE "Block"."VolumeID" = "Remotevolume"."ID" AND "Remotevolume"."Name" = "duplicati-b71ee529958e444b58e488870c9db1c18.dblock.zip.aes" 
ExecuteNonQuery: INSERT INTO "MissingBlocks-HASHHASHHASH" ("Hash", "Size", "Restored") SELECT DISTINCT "Block"."Hash", "Block"."Size", 0 AS "Restored" FROM "Block","Remotevolume" WHERE "Block"."VolumeID" = "Remotevolume"."ID" AND "Remotevolume"."Name" = "duplicati-b71ee529958e444b58e488870c9db1c18.dblock.zip.aes"  took 0:00:00:00.006
Starting - ExecuteNonQuery: CREATE UNIQUE INDEX "MissingBlocks-HASHHASHHASH-Ix" ON "MissingBlocks-HASHHASHHASH" ("Hash", "Size", "Restored")
ExecuteNonQuery: CREATE UNIQUE INDEX "MissingBlocks-HASHHASHHASH-Ix" ON "MissingBlocks-HASHHASHHASH" ("Hash", "Size", "Restored") took 0:00:00:00.000
Starting - ExecuteReader: SELECT DISTINCT "MissingBlocks-HASHHASHHASH"."Hash", "MissingBlocks-HASHHASHHASH"."Size", "File"."Path", "BlocksetEntry"."Index" * 102400 FROM  "MissingBlocks-HASHHASHHASH", "Block", "BlocksetEntry", "File" WHERE "File"."BlocksetID" = "BlocksetEntry"."BlocksetID" AND "Block"."ID" = "BlocksetEntry"."BlockID" AND "MissingBlocks-HASHHASHHASH"."Hash" = "Block"."Hash" AND "MissingBlocks-HASHHASHHASH"."Size" = "Block"."Size" AND "MissingBlocks-HASHHASHHASH"."Restored" = 0 
ExecuteReader: SELECT DISTINCT "MissingBlocks-HASHHASHHASH"."Hash", "MissingBlocks-HASHHASHHASH"."Size", "File"."Path", "BlocksetEntry"."Index" * 102400 FROM  "MissingBlocks-HASHHASHHASH", "Block", "BlocksetEntry", "File" WHERE "File"."BlocksetID" = "BlocksetEntry"."BlocksetID" AND "Block"."ID" = "BlocksetEntry"."BlockID" AND "MissingBlocks-HASHHASHHASH"."Hash" = "Block"."Hash" AND "MissingBlocks-HASHHASHHASH"."Size" = "Block"."Size" AND "MissingBlocks-HASHHASHHASH"."Restored" = 0  took 0:00:00:00.009

Could you check About on Duplicati’s main screen to get the version of Duplicati? As a long shot, there was a change in 2.0.3.10: “Removed automatic attempts to rebuild dblock files as it is slow and rarely finds all the missing pieces (can be enabled with --rebuild-missing-dblock-files).” The log looks like it created the missing block list, and an older Duplicati might be going through local and remote files trying to recreate a dblock.

There’s also a recent issue, Purge Broken Files Running for 36 Hours, where things stalled so tightly that strace wasn’t even showing system calls. Mono was updated in case that helped, but it didn’t. Possibly you could try strace, ltrace, sar, and other tools to look beyond just CPU and memory.
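As a sketch of what that could look like here, assuming the container is still named repair-job as in your command and that strace and sysstat are available on the host:

# Find the PID of the container's main process on the host
PID=$(docker inspect --format '{{.State.Pid}}' repair-job)

# See whether it is making any system calls at all
sudo strace -f -p "$PID" -o /tmp/repair-strace.log

# If it looks blocked in the kernel, the kernel stack can hint at where
sudo cat /proc/$PID/stack

# CPU and disk activity over time, sampled every 5 seconds
sar -u 5
sar -d 5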

Hi jerrac,

I’m wondering what the differences between your original server platform and the Docker one are. Was the server platform Windows by any chance? I have a stalling/hung issue as well (killed at over 36 hours for a rebuild or purge on an 8GB source and 16GB target), but when I move the job, database, and backup store to a Windows machine it just blazes through. Just a shot in the dark, as it might be similar to my issue.

The docker image is based on CentOS and has Duplicati 2.0.4.5 installed.

I do have the --rebuild-missing-dblock-files flag set. I don’t remember the details, but a previous attempt ended with a message saying I should add that flag.

The history of the job is a bit complicated. It started as a normal server instance running on Ubuntu 14.04. Since 14.04 is EOL in April, I’ve been building new VMs to replace the old ones, so the new VM this is running on is Ubuntu 18.04.

In an effort to have better logging than what Duplicati provides, I’ve been running my jobs via gitlab-runner. That goal was actually met, since I was just able to go look at the history of this…

For this specific job, I have had a repair operation succeed. Unfortunately, the backup run after it failed with errors about missing dblock and dindex files, and it also had “Found 3 remote files that are not recorded in local storage, please run repair” at the end.

The repair job after that ended with:

The backup storage destination is missing data files. You can either enable `--rebuild-missing-dblock-files` or run the purge command to remove these files. The following files are missing:  < list of files here >

I did try moving the db off of nfs. Doesn’t seem to have helped.

The operation Repair has started
Starting - Running Repair
Starting - ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549581095); SELECT last_insert_rowid();
ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549581095); SELECT last_insert_rowid(); took 0:00:00:00.020
Starting - ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549581096); SELECT last_insert_rowid();
ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549581096); SELECT last_insert_rowid(); took 0:00:00:00.011
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM ( SELECT DISTINCT c1 FROM (SELECT COUNT(*) AS "C1" FROM (SELECT DISTINCT "BlocksetID" FROM "Metadataset") UNION SELECT COUNT(*) AS "C1" FROM "Metadataset" ))
ExecuteScalarInt64: SELECT COUNT(*) FROM ( SELECT DISTINCT c1 FROM (SELECT COUNT(*) AS "C1" FROM (SELECT DISTINCT "BlocksetID" FROM "Metadataset") UNION SELECT COUNT(*) AS "C1" FROM "Metadataset" )) took 0:00:00:01.400
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "Path", "BlocksetID", "MetadataID", COUNT(*) as "Duplicates" FROM "File" GROUP BY "Path", "BlocksetID", "MetadataID") WHERE "Duplicates" > 1
ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "Path", "BlocksetID", "MetadataID", COUNT(*) as "Duplicates" FROM "File" GROUP BY "Path", "BlocksetID", "MetadataID") WHERE "Duplicates" > 1 took 0:00:00:10.480
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "BlocksetID", "Index", COUNT(*) AS "EC" FROM "BlocklistHash" GROUP BY "BlocksetID", "Index") WHERE "EC" > 1)
ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "BlocksetID", "Index", COUNT(*) AS "EC" FROM "BlocklistHash" GROUP BY "BlocksetID", "Index") WHERE "EC" > 1) took 0:00:00:00.170
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual")
ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual") took 0:00:00:01.176
Starting - ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549581109); SELECT last_insert_rowid();
ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549581109); SELECT last_insert_rowid(); took 0:00:00:00.011
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "Block" WHERE "Size" > 102400
ExecuteScalarInt64: SELECT COUNT(*) FROM "Block" WHERE "Size" > 102400 took 0:00:00:14.947
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
Starting - RemoteOperationList
Backend event: List - Started:  ()
  Listing remote folder ...
Backend event: List - Completed:  (833 bytes)
RemoteOperationList took 0:00:00:23.087
Starting - ExecuteReader: SELECT DISTINCT "Name", "State" FROM "Remotevolume" WHERE "Name" IN (SELECT "Name" FROM "Remotevolume" WHERE "State" IN ("Deleted", "Deleting")) AND NOT "State" IN ("Deleted", "Deleting")
ExecuteReader: SELECT DISTINCT "Name", "State" FROM "Remotevolume" WHERE "Name" IN (SELECT "Name" FROM "Remotevolume" WHERE "State" IN ("Deleted", "Deleting")) AND NOT "State" IN ("Deleted", "Deleting") took 0:00:00:00.001
Starting - ExecuteNonQuery: CREATE TEMPORARY TABLE "MissingBlocks-HASHHASHHASH" ("Hash" TEXT NOT NULL, "Size" INTEGER NOT NULL, "Restored" INTEGER NOT NULL) 
ExecuteNonQuery: CREATE TEMPORARY TABLE "MissingBlocks-HASHHASHHASH" ("Hash" TEXT NOT NULL, "Size" INTEGER NOT NULL, "Restored" INTEGER NOT NULL)  took 0:00:00:00.024
Starting - ExecuteNonQuery: INSERT INTO "MissingBlocks-HASHHASHHASH" ("Hash", "Size", "Restored") SELECT DISTINCT "Block"."Hash", "Block"."Size", 0 AS "Restored" FROM "Block","Remotevolume" WHERE "Block"."VolumeID" = "Remotevolume"."ID" AND "Remotevolume"."Name" = "duplicati-b71ee529958e444b58e488870c9db1c18.dblock.zip.aes" 
ExecuteNonQuery: INSERT INTO "MissingBlocks-HASHHASHHASH" ("Hash", "Size", "Restored") SELECT DISTINCT "Block"."Hash", "Block"."Size", 0 AS "Restored" FROM "Block","Remotevolume" WHERE "Block"."VolumeID" = "Remotevolume"."ID" AND "Remotevolume"."Name" = "duplicati-b71ee529958e444b58e488870c9db1c18.dblock.zip.aes"  took 0:00:00:00.005
Starting - ExecuteNonQuery: CREATE UNIQUE INDEX "MissingBlocks-HASHHASHHASH-Ix" ON "MissingBlocks-HASHHASHHASH" ("Hash", "Size", "Restored")
ExecuteNonQuery: CREATE UNIQUE INDEX "MissingBlocks-HASHHASHHASH-Ix" ON "MissingBlocks-HASHHASHHASH" ("Hash", "Size", "Restored") took 0:00:00:00.000
Starting - ExecuteReader: SELECT DISTINCT "MissingBlocks-HASHHASHHASH"."Hash", "MissingBlocks-HASHHASHHASH"."Size", "File"."Path", "BlocksetEntry"."Index" * 102400 FROM  "MissingBlocks-HASHHASHHASH", "Block", "BlocksetEntry", "File" WHERE "File"."BlocksetID" = "BlocksetEntry"."BlocksetID" AND "Block"."ID" = "BlocksetEntry"."BlockID" AND "MissingBlocks-HASHHASHHASH"."Hash" = "Block"."Hash" AND "MissingBlocks-HASHHASHHASH"."Size" = "Block"."Size" AND "MissingBlocks-HASHHASHHASH"."Restored" = 0 
ExecuteReader: SELECT DISTINCT "MissingBlocks-HASHHASHHASH"."Hash", "MissingBlocks-HASHHASHHASH"."Size", "File"."Path", "BlocksetEntry"."Index" * 102400 FROM  "MissingBlocks-HASHHASHHASH", "Block", "BlocksetEntry", "File" WHERE "File"."BlocksetID" = "BlocksetEntry"."BlocksetID" AND "Block"."ID" = "BlocksetEntry"."BlockID" AND "MissingBlocks-HASHHASHHASH"."Hash" = "Block"."Hash" AND "MissingBlocks-HASHHASHHASH"."Size" = "Block"."Size" AND "MissingBlocks-HASHHASHHASH"."Restored" = 0  took 0:00:00:00.001

It’s been stuck there since yesterday afternoon.

Hmmm… Maybe I’ll make a copy of the target data and run purge-broken-files.
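If I go that route, I’ll probably do a dry run first. A rough sketch, reusing the sanitized values from the repair command above and assuming purge-broken-files honors --dry-run in this build:

docker run --rm \
     --volume duplicati_data_storage:/data/duplicati \
     duplicat-in-docker-image \
       duplicati-cli purge-broken-files googledriveuri \
         --dbpath=/data/duplicati/NamedDB.sqlite \
         --passphrase=password \
         --disable-module=console-password-input \
         --dry-run \
         --console-log-level=verbose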