The Docker image is based on CentOS and has Duplicati 2.0.4.5 installed.
I do have the `--rebuild-missing-dblock-files` flag set. I don’t remember the details, but a previous attempt ended with a message saying I should add that flag.
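For reference, the repair was invoked roughly like this; this is a sketch from memory, and the storage URL, passphrase, and database path below are placeholders, not my real values:

```shell
# Hypothetical sketch -- storage URL, passphrase, and dbpath are placeholders.
duplicati-cli repair "ssh://backup.example.com/backups" \
  --dbpath=/data/duplicati/job.sqlite \
  --passphrase="..." \
  --rebuild-missing-dblock-files=true
```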
The history of the job is a bit complicated. It started as a normal server instance running on Ubuntu 14.04. Since 14.04 reaches EOL in April, I’ve been building new VMs to replace the old ones, so the new VM this is running on is Ubuntu 18.04.
In an effort to have better logging than what Duplicati provides, I’ve been running my jobs via gitlab-runner. That goal was actually met, since I was just able to go look at the history of this…
For this specific job, I have had a repair operation succeed. Unfortunately, the backup run that followed failed with errors about missing dblock and dindex files, and it also had “Found 3 remote files that are not recorded in local storage, please run repair” at the end.
The repair job after that ended with:
The backup storage destination is missing data files. You can either enable `--rebuild-missing-dblock-files` or run the purge command to remove these files. The following files are missing: < list of files here >
I did try moving the database off of NFS. It doesn’t seem to have helped.
The operation Repair has started
Starting - Running Repair
Starting - ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549581095); SELECT last_insert_rowid();
ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549581095); SELECT last_insert_rowid(); took 0:00:00:00.020
Starting - ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549581096); SELECT last_insert_rowid();
ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549581096); SELECT last_insert_rowid(); took 0:00:00:00.011
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
ExecuteReader: SELECT "Key", "Value" FROM "Configuration" took 0:00:00:00.000
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
ExecuteReader: SELECT "Key", "Value" FROM "Configuration" took 0:00:00:00.000
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
ExecuteReader: SELECT "Key", "Value" FROM "Configuration" took 0:00:00:00.000
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM ( SELECT DISTINCT c1 FROM (SELECT COUNT(*) AS "C1" FROM (SELECT DISTINCT "BlocksetID" FROM "Metadataset") UNION SELECT COUNT(*) AS "C1" FROM "Metadataset" ))
ExecuteScalarInt64: SELECT COUNT(*) FROM ( SELECT DISTINCT c1 FROM (SELECT COUNT(*) AS "C1" FROM (SELECT DISTINCT "BlocksetID" FROM "Metadataset") UNION SELECT COUNT(*) AS "C1" FROM "Metadataset" )) took 0:00:00:01.400
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "Path", "BlocksetID", "MetadataID", COUNT(*) as "Duplicates" FROM "File" GROUP BY "Path", "BlocksetID", "MetadataID") WHERE "Duplicates" > 1
ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "Path", "BlocksetID", "MetadataID", COUNT(*) as "Duplicates" FROM "File" GROUP BY "Path", "BlocksetID", "MetadataID") WHERE "Duplicates" > 1 took 0:00:00:10.480
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "BlocksetID", "Index", COUNT(*) AS "EC" FROM "BlocklistHash" GROUP BY "BlocksetID", "Index") WHERE "EC" > 1)
ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "BlocksetID", "Index", COUNT(*) AS "EC" FROM "BlocklistHash" GROUP BY "BlocksetID", "Index") WHERE "EC" > 1) took 0:00:00:00.170
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual")
ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual") took 0:00:00:01.176
Starting - ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549581109); SELECT last_insert_rowid();
ExecuteScalarInt64: INSERT INTO "Operation" ("Description", "Timestamp") VALUES ("Repair", 1549581109); SELECT last_insert_rowid(); took 0:00:00:00.011
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
ExecuteReader: SELECT "Key", "Value" FROM "Configuration" took 0:00:00:00.000
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
ExecuteReader: SELECT "Key", "Value" FROM "Configuration" took 0:00:00:00.000
Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "Block" WHERE "Size" > 102400
ExecuteScalarInt64: SELECT COUNT(*) FROM "Block" WHERE "Size" > 102400 took 0:00:00:14.947
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
ExecuteReader: SELECT "Key", "Value" FROM "Configuration" took 0:00:00:00.000
Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
ExecuteReader: SELECT "Key", "Value" FROM "Configuration" took 0:00:00:00.000
Starting - RemoteOperationList
Backend event: List - Started: ()
Listing remote folder ...
Backend event: List - Completed: (833 bytes)
RemoteOperationList took 0:00:00:23.087
Starting - ExecuteReader: SELECT DISTINCT "Name", "State" FROM "Remotevolume" WHERE "Name" IN (SELECT "Name" FROM "Remotevolume" WHERE "State" IN ("Deleted", "Deleting")) AND NOT "State" IN ("Deleted", "Deleting")
ExecuteReader: SELECT DISTINCT "Name", "State" FROM "Remotevolume" WHERE "Name" IN (SELECT "Name" FROM "Remotevolume" WHERE "State" IN ("Deleted", "Deleting")) AND NOT "State" IN ("Deleted", "Deleting") took 0:00:00:00.001
Starting - ExecuteNonQuery: CREATE TEMPORARY TABLE "MissingBlocks-HASHHASHHASH" ("Hash" TEXT NOT NULL, "Size" INTEGER NOT NULL, "Restored" INTEGER NOT NULL)
ExecuteNonQuery: CREATE TEMPORARY TABLE "MissingBlocks-HASHHASHHASH" ("Hash" TEXT NOT NULL, "Size" INTEGER NOT NULL, "Restored" INTEGER NOT NULL) took 0:00:00:00.024
Starting - ExecuteNonQuery: INSERT INTO "MissingBlocks-HASHHASHHASH" ("Hash", "Size", "Restored") SELECT DISTINCT "Block"."Hash", "Block"."Size", 0 AS "Restored" FROM "Block","Remotevolume" WHERE "Block"."VolumeID" = "Remotevolume"."ID" AND "Remotevolume"."Name" = "duplicati-b71ee529958e444b58e488870c9db1c18.dblock.zip.aes"
ExecuteNonQuery: INSERT INTO "MissingBlocks-HASHHASHHASH" ("Hash", "Size", "Restored") SELECT DISTINCT "Block"."Hash", "Block"."Size", 0 AS "Restored" FROM "Block","Remotevolume" WHERE "Block"."VolumeID" = "Remotevolume"."ID" AND "Remotevolume"."Name" = "duplicati-b71ee529958e444b58e488870c9db1c18.dblock.zip.aes" took 0:00:00:00.005
Starting - ExecuteNonQuery: CREATE UNIQUE INDEX "MissingBlocks-HASHHASHHASH-Ix" ON "MissingBlocks-HASHHASHHASH" ("Hash", "Size", "Restored")
ExecuteNonQuery: CREATE UNIQUE INDEX "MissingBlocks-HASHHASHHASH-Ix" ON "MissingBlocks-HASHHASHHASH" ("Hash", "Size", "Restored") took 0:00:00:00.000
Starting - ExecuteReader: SELECT DISTINCT "MissingBlocks-HASHHASHHASH"."Hash", "MissingBlocks-HASHHASHHASH"."Size", "File"."Path", "BlocksetEntry"."Index" * 102400 FROM "MissingBlocks-HASHHASHHASH", "Block", "BlocksetEntry", "File" WHERE "File"."BlocksetID" = "BlocksetEntry"."BlocksetID" AND "Block"."ID" = "BlocksetEntry"."BlockID" AND "MissingBlocks-HASHHASHHASH"."Hash" = "Block"."Hash" AND "MissingBlocks-HASHHASHHASH"."Size" = "Block"."Size" AND "MissingBlocks-HASHHASHHASH"."Restored" = 0
ExecuteReader: SELECT DISTINCT "MissingBlocks-HASHHASHHASH"."Hash", "MissingBlocks-HASHHASHHASH"."Size", "File"."Path", "BlocksetEntry"."Index" * 102400 FROM "MissingBlocks-HASHHASHHASH", "Block", "BlocksetEntry", "File" WHERE "File"."BlocksetID" = "BlocksetEntry"."BlocksetID" AND "Block"."ID" = "BlocksetEntry"."BlockID" AND "MissingBlocks-HASHHASHHASH"."Hash" = "Block"."Hash" AND "MissingBlocks-HASHHASHHASH"."Size" = "Block"."Size" AND "MissingBlocks-HASHHASHHASH"."Restored" = 0 took 0:00:00:00.001
It’s been stuck there since yesterday afternoon.
Hmmm… Maybe I’ll make a copy of the target data and do a purge broken files.
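That plan, sketched out; all paths and the storage URL here are placeholders for illustration, and I’d only run this against the copy:

```shell
# Hypothetical sketch of the purge plan -- every path/URL is a placeholder.
# 1. Copy the target data somewhere safe first.
rsync -a /mnt/backups/job/ /mnt/backups/job-copy/
# 2. Point Duplicati at the copy and see what it considers broken.
duplicati-cli list-broken-files "file:///mnt/backups/job-copy" \
  --dbpath=/data/duplicati/job.sqlite
# 3. If that list looks sane, actually purge the broken files.
duplicati-cli purge-broken-files "file:///mnt/backups/job-copy" \
  --dbpath=/data/duplicati/job.sqlite
```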