Database repair failed; can I debug and manually fix it?

Long story short, the hard drive with the SQLite database died, so I rebuilt the job from a saved .json file and started the repair process. I’ve tried this multiple times and it fails the same way every time. I watch the live log at verbose level, though not non-stop. I know it at least gets to around file 7100 of 7200. At some point after that, the live verbose log shows all of this:

* Mar 7, 2023 11:41 AM: The operation Backup has failed with error: The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.

* Mar 7, 2023 11:41 AM: HTTP Response: 201 - Created: {"username":"myemail","ip":"73.20.9.184","backupsetname":"Z drive -> Adam","result":"Failed","errors":"Duplicati Backup report for Z drive\n\nFailed: The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.\nDetails: Duplicati.Library.Interface.UserInformationException: The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.\n at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x00122] in <e60bc008dd1b454d861cfacbdd3760b9>:0 \n at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task) [0x00050] in <9a758ff4db6c48d6b3d4d0e5c2adf6d1>:0 \n at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x00009] in <e60bc008dd1b454d861cfacbdd3760b9>:0 \n at Duplicati.Library.Main.Controller+<>c__DisplayClass14_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x0004b] in <e60bc008dd1b454d861cfacbdd3760b9>:0 \n at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0011c] in <e60bc008dd1b454d861cfacbdd3760b9>:0","backupset_id":10277,"updated_at":"2023-03-07 19:41:53","created_at":"2023-03-07 19:41:53","id":19242824}

* Mar 7, 2023 11:41 AM: The operation Backup has started

* Mar 7, 2023 11:41 AM: The operation Repair has failed with error: SQLite error cannot rollback - no transaction is active

My stored logs contain a couple of long errors; to save space, here is the part of the call stack where Duplicati hits the SQLite error:

 at Mono.Data.Sqlite.SqliteTransaction.Dispose (System.Boolean disposing) [0x0001e] in <3903fae468ee4cf992c5b1715eb9eb97>:0 
  at System.Data.Common.DbTransaction.Dispose () [0x00000] in <fb434c75f80444cb80ba3a9212e0956f>:0 
  at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun (Duplicati.Library.Main.Database.LocalDatabase dbparent, System.Boolean updating, Duplicati.Library.Utility.IFilter filter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+NumberedFilterFilelistDelegate filelistfilter, 

I could just delete the backed-up files and spend a few days starting this process over again, but I’d like to dig into this more and see if there is a specific bug or reason for it. Any ideas on how to begin narrowing down the fault?

It seems that you are starting a backup (“The operation Backup has started”). Did you try to just run an explicit rebuild without a backup? I think that’s the normal procedure after a database loss.

You could try to run the repair operation from a terminal with --console-log-level=verbose and save the output to a file to get a better view of the problems (you can export the job as a command line and basically replace ‘backup’ with ‘repair’; it should work after a few adjustments, such as removing the backup source paths).
Possibly you could zip the output and attach it here.
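
For instance, a rough sketch of the idea with placeholder values (your exported command will contain your real URL, dbpath, and options):

  # The exported "backup" command looks roughly like:
  #   mono Duplicati.CommandLine.exe backup "ssh://host:port/path?..." /source/path --dbpath=... [options]
  # Change "backup" to "repair", drop the source path, and capture all output:
  mono /usr/lib/duplicati/Duplicati.CommandLine.exe repair \
    "ssh://host:port/path?auth-username=USER&auth-password=PASS" \
    --dbpath=/path/to/job.sqlite \
    --console-log-level=verbose > repair.txt 2>&1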

Yes, I am doing the full explicit rebuild process. The UI button I click on is “[Recreate (delete and repair)]”

I can run a rebuild from the console and redirect both standard output and standard error to a log file. I’ll try that next.

While I know rebuild is sometimes slow (something else to work on…), has anything noteworthy shown up yet?
On the progress bar (if it’s still running), anything past 70% is dblock downloading. Past 90% can be slow.

It’s coming. It is currently processing filelist 471 of 612.

Repairs are slow on 1 TB backups, especially when the backup machine has an old Intel Atom processor. On the first attempt I goofed because I forgot to both put the process in the background and disown it. The second attempt failed due to a power outage.
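
For reference, this is the bit I forgot (a generic shell sketch, nothing Duplicati-specific): the job has to be detached from the terminal so it survives the SSH session ending:

  # Run in the background, immune to hangups, with output captured:
  nohup sudo mono /usr/lib/duplicati/Duplicati.CommandLine.exe repair ... > output.txt 2>&1 &
  disown   # remove the job from the shell's job table; screen or tmux would also work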

It seems that if I had this pull request, I might be done by now: EXPERIMENTAL rebuild database optimization by gpatel-fr · Pull Request #4890 · duplicati/duplicati · GitHub

Finally finished. It gets to the very end, then dies:

Processing filelist volume 610 of 612
Backend event: Get - Started: duplicati-20230219T080002Z.dlist.zip.aes (13.25 MB)
  Downloading file (13.25 MB) ...
Operation Get with file duplicati-20230219T080002Z.dlist.zip.aes attempt 1 of 5 failed with message: An established connection was aborted by the server. => An established connection was aborted by the server.
Backend event: Get - Retrying: duplicati-20230219T080002Z.dlist.zip.aes (13.25 MB)
Backend event: Get - Started: duplicati-20230219T080002Z.dlist.zip.aes (13.25 MB)
  Downloading file (13.25 MB) ...
Backend event: Get - Completed: duplicati-20230219T080002Z.dlist.zip.aes (13.25 MB)
Processing filelist volume 611 of 612
Backend event: Get - Started: duplicati-20230220T080002Z.dlist.zip.aes (13.25 MB)
  Downloading file (13.25 MB) ...
Operation Get with file duplicati-20230220T080002Z.dlist.zip.aes attempt 1 of 5 failed with message: An established connection was aborted by the server. => An established connection was aborted by the server.
Backend event: Get - Retrying: duplicati-20230220T080002Z.dlist.zip.aes (13.25 MB)
Backend event: Get - Started: duplicati-20230220T080002Z.dlist.zip.aes (13.25 MB)
  Downloading file (13.25 MB) ...
Backend event: Get - Completed: duplicati-20230220T080002Z.dlist.zip.aes (13.25 MB)
Processing filelist volume 612 of 612

Mono.Data.Sqlite.SqliteException (0x80004005): SQLite error
cannot rollback - no transaction is active
  at Mono.Data.Sqlite.SQLite3.Reset (Mono.Data.Sqlite.SqliteStatement stmt) [0x00096] in <3903fae468ee4cf992c5b1715eb9eb97>:0
  at Mono.Data.Sqlite.SQLite3.Step (Mono.Data.Sqlite.SqliteStatement stmt) [0x00046] in <3903fae468ee4cf992c5b1715eb9eb97>:0
  at Mono.Data.Sqlite.SqliteDataReader.NextResult () [0x00129] in <3903fae468ee4cf992c5b1715eb9eb97>:0
  at Mono.Data.Sqlite.SqliteDataReader..ctor (Mono.Data.Sqlite.SqliteCommand cmd, System.Data.CommandBehavior behave) [0x00051] in <3903fae468ee4cf992c5b1715eb9eb97>:0
  at (wrapper remoting-invoke-with-check) Mono.Data.Sqlite.SqliteDataReader..ctor(Mono.Data.Sqlite.SqliteCommand,System.Data.CommandBehavior)
  at Mono.Data.Sqlite.SqliteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x00006] in <3903fae468ee4cf992c5b1715eb9eb97>:0
  at Mono.Data.Sqlite.SqliteCommand.ExecuteNonQuery () [0x00000] in <3903fae468ee4cf992c5b1715eb9eb97>:0
  at Mono.Data.Sqlite.SqliteTransaction.IssueRollback (Mono.Data.Sqlite.SqliteConnection cnn) [0x00012] in <3903fae468ee4cf992c5b1715eb9eb97>:0
  at Mono.Data.Sqlite.SqliteTransaction.Rollback () [0x00008] in <3903fae468ee4cf992c5b1715eb9eb97>:0
  at Mono.Data.Sqlite.SqliteTransaction.Dispose (System.Boolean disposing) [0x0001e] in <3903fae468ee4cf992c5b1715eb9eb97>:0
  at System.Data.Common.DbTransaction.Dispose () [0x00000] in <fb434c75f80444cb80ba3a9212e0956f>:0
  at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun (Duplicati.Library.Main.Database.LocalDatabase dbparent, System.Boolean updating, Duplicati.Library.Utility.IFilter filter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+NumberedFilterFilelistDelegate filelistfilter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+BlockVolumePostProcessor blockprocessor) [0x00af2] in <e60bc008dd1b454d861cfacbdd3760b9>:0
  at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.Run (System.String path, Duplicati.Library.Utility.IFilter filter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+NumberedFilterFilelistDelegate filelistfilter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+BlockVolumePostProcessor blockprocessor) [0x00037] in <e60bc008dd1b454d861cfacbdd3760b9>:0
  at Duplicati.Library.Main.Operation.RepairHandler.RunRepairLocal (Duplicati.Library.Utility.IFilter filter) [0x000ba] in <e60bc008dd1b454d861cfacbdd3760b9>:0
  at Duplicati.Library.Main.Operation.RepairHandler.Run (Duplicati.Library.Utility.IFilter filter) [0x00012] in <e60bc008dd1b454d861cfacbdd3760b9>:0
  at Duplicati.Library.Main.Controller+<>c__DisplayClass18_0.<Repair>b__0 (Duplicati.Library.Main.RepairResults result) [0x0001c] in <e60bc008dd1b454d861cfacbdd3760b9>:0
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0026f] in <e60bc008dd1b454d861cfacbdd3760b9>:0
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x00007] in <e60bc008dd1b454d861cfacbdd3760b9>:0
  at Duplicati.Library.Main.Controller.Repair (Duplicati.Library.Utility.IFilter filter) [0x0001a] in <e60bc008dd1b454d861cfacbdd3760b9>:0
  at Duplicati.CommandLine.Commands.Repair (System.IO.TextWriter outwriter, System.Action`1[T] setup, System.Collections.Generic.List`1[T] args, System.Collections.Generic.Dictionary`2[TKey,TValue] options, Duplicati.Library.Utility.IFilter filter) [0x00030] in <eb902dee3a2c4c3fbc8192580b4a42b2>:0
  at (wrapper delegate-invoke) System.Func`6[System.IO.TextWriter,System.Action`1[Duplicati.Library.Main.Controller],System.Collections.Generic.List`1[System.String],System.Collections.Generic.Dictionary`2[System.String,System.String],Duplicati.Library.Utility.IFilter,System.Int32].invoke_TResult_T1_T2_T3_T4_T5(System.IO.TextWriter,System.Action`1<Duplicati.Library.Main.Controller>,System.Collections.Generic.List`1<string>,System.Collections.Generic.Dictionary`2<string, string>,Duplicati.Library.Utility.IFilter)
  at Duplicati.CommandLine.Program.ParseCommandLine (System.IO.TextWriter outwriter, System.Action`1[T] setup, System.Boolean& verboseErrors, System.String[] args) [0x00342] in <eb902dee3a2c4c3fbc8192580b4a42b2>:0
  at Duplicati.CommandLine.Program.RunCommandLine (System.IO.TextWriter outwriter, System.IO.TextWriter errwriter, System.Action`1[T] setup, System.String[] args) [0x00002] in <eb902dee3a2c4c3fbc8192580b4a42b2>:0


For context, this is the repair command I ran (sensitive parts edited out). I got the command from the Duplicati web UI and added the --console-log-level=verbose option.

sudo mono /usr/lib/duplicati/Duplicati.CommandLine.exe repair "ssh://someurl:2222//home/zdrive/backup?auth-username=something&auth-password=something&ssh-fingerprint=ssh-something" --backup-name="Z drive" --dbpath=/usr/lib/duplicati/data/EFZQVVNWQU.sqlite --encryption-module=aes --compression-module=zip --dblock-size=200MB --passphrase=something --no-auto-compact=true --send-http-url="https://www.duplicati-monitoring.com/log/something" --asynchronous-concurrent-upload-limit=1 --disable-module=console-password-input --exclude=/external/sharebackup/The_no_backup_folder/ --console-log-level=verbose &> output.txt

Also, it seems that something is dropping the SSH connection, perhaps every 60 minutes. Duplicati has no problem reconnecting and resuming, so I’m almost positive that’s not the issue.

Beyond that, I now need to figure out exactly what’s happening when the repair reaches its last step. Somewhere in there is an SQL error. Is there any way to see what sequence of steps happens at that point, or to print all of the SQL statements being executed?

Hello

The database repair has three steps:

  • download the file lists (one file list = one version)

  • download the index files

  • if everything is all right, there is no third step; if not, a number of block files are read to try to fix bad or missing index files

Your repair is failing at step 1, not at the end.

While I have never tested a repair with this many versions, I don’t think that downloading 600 small files should normally take more than half an hour, even on a link that is slow by modern standards. So the network problem could really be an issue.

I don’t see how it could be a network issue; I’ve always assumed it’s a CPU bottleneck. The place I’m backing up from has 40 Mbps up, and my end easily has more than 40 Mbps down.

What happens is that it quickly transfers one of these .aes files, then I can watch mono peg at near 100% CPU usage for a long time. It’s a slow computer with an older Intel Atom processor (specifically a UDOO, one of the first x64 competitors to the Raspberry Pi). Processing one file can take a good 10 minutes or more.

Are you referring to a dlist.zip.aes file here? If yes, I’d be curious to see a profiling log, at least the beginning. Since profiling makes the whole process several times slower, you have no chance of seeing it finish, but seeing the queries executed during this 10-minute wait with the CPU at 100% could be interesting.

For your information, the PR you linked is not related to phase 1 (handling of dlist files), as I have never seen Duplicati take a lot of time at this point. Maybe better queries are needed at this step as well, but I don’t know why it would be slow there.

An example of a profiling log from reading a dlist of a backup of C:\backup source\A.txt is below.

2023-03-15 18:40:14 -04 - [Verbose-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-ProcessingFilelistVolumes]: Processing filelist volume 1 of 2
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Fileset" ("OperationID", "Timestamp", "VolumeID", "IsFullBackup") VALUES (1, 1678919836, 1, 0); SELECT last_insert_rowid();
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Fileset" ("OperationID", "Timestamp", "VolumeID", "IsFullBackup") VALUES (1, 1678919836, 1, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: UPDATE "Fileset" SET "IsFullBackup" = True WHERE "ID" = 1;
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: UPDATE "Fileset" SET "IsFullBackup" = True WHERE "ID" = 1; took 0:00:00:00.001
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "ID" FROM "PathPrefix" WHERE "Prefix" = "C:\backup source\"
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "ID" FROM "PathPrefix" WHERE "Prefix" = "C:\backup source\" took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "PathPrefix" ("Prefix") VALUES ("C:\backup source\"); SELECT last_insert_rowid(); 
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "PathPrefix" ("Prefix") VALUES ("C:\backup source\"); SELECT last_insert_rowid();  took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Blockset" WHERE "Length" = 1 AND "FullHash" = "VZrq0IJk1XldOQlxjN0Fq9SVcuhP5VWQ7vMaiKCP3/0=" 
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "ID" FROM "Blockset" WHERE "Length" = 1 AND "FullHash" = "VZrq0IJk1XldOQlxjN0Fq9SVcuhP5VWQ7vMaiKCP3/0="  took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Blockset" ("Length", "FullHash") VALUES (1,"VZrq0IJk1XldOQlxjN0Fq9SVcuhP5VWQ7vMaiKCP3/0="); SELECT last_insert_rowid();
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Blockset" ("Length", "FullHash") VALUES (1,"VZrq0IJk1XldOQlxjN0Fq9SVcuhP5VWQ7vMaiKCP3/0="); SELECT last_insert_rowid(); took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "Metadataset"."ID" FROM "Metadataset","Blockset" WHERE "Metadataset"."BlocksetID" = "Blockset"."ID" AND "Blockset"."FullHash" = "UW7CCxtLGw1gqSW6qxiGU8Xd/arQWsoW55bRZeAeQug=" AND "Blockset"."Length" = 137 
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "Metadataset"."ID" FROM "Metadataset","Blockset" WHERE "Metadataset"."BlocksetID" = "Blockset"."ID" AND "Blockset"."FullHash" = "UW7CCxtLGw1gqSW6qxiGU8Xd/arQWsoW55bRZeAeQug=" AND "Blockset"."Length" = 137  took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Blockset" WHERE "Length" = 137 AND "FullHash" = "UW7CCxtLGw1gqSW6qxiGU8Xd/arQWsoW55bRZeAeQug=" 
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "ID" FROM "Blockset" WHERE "Length" = 137 AND "FullHash" = "UW7CCxtLGw1gqSW6qxiGU8Xd/arQWsoW55bRZeAeQug="  took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Blockset" ("Length", "FullHash") VALUES (137,"UW7CCxtLGw1gqSW6qxiGU8Xd/arQWsoW55bRZeAeQug="); SELECT last_insert_rowid();
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Blockset" ("Length", "FullHash") VALUES (137,"UW7CCxtLGw1gqSW6qxiGU8Xd/arQWsoW55bRZeAeQug="); SELECT last_insert_rowid(); took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "Metadataset" ("BlocksetID") VALUES (2); SELECT last_insert_rowid();
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "Metadataset" ("BlocksetID") VALUES (2); SELECT last_insert_rowid(); took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT "ID" FROM "FileLookup" WHERE "PrefixID" = 1 AND "Path" = "A.txt" AND "BlocksetID" = 1 AND "MetadataID" = 1 
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT "ID" FROM "FileLookup" WHERE "PrefixID" = 1 AND "Path" = "A.txt" AND "BlocksetID" = 1 AND "MetadataID" = 1  took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: INSERT INTO "FileLookup" ("PrefixID", "Path", "BlocksetID", "MetadataID") VALUES (1,"A.txt",1,1); SELECT last_insert_rowid();
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "FileLookup" ("PrefixID", "Path", "BlocksetID", "MetadataID") VALUES (1,"A.txt",1,1); SELECT last_insert_rowid(); took 0:00:00:00.000

The “Fileset” work at the top probably happens once per dlist, but each file probably deals with “Blockset” (file data), “Metadataset” (file attributes), “PathPrefix”, and “FileLookup”. The latter two form a space-efficient way to store the file path: previously the full path was stored intact; now the path’s prefix is stored separately (see the sketch below).
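
As a rough illustration only (inferred from the column names visible in the queries above, not taken from Duplicati’s source), the full path can be rebuilt by joining the two tables; this assumes the sqlite3 CLI is available and Duplicati isn’t holding the database open:

  sqlite3 /path/to/job.sqlite '
    SELECT "PathPrefix"."Prefix" || "FileLookup"."Path" AS FullPath
    FROM "FileLookup"
    JOIN "PathPrefix" ON "FileLookup"."PrefixID" = "PathPrefix"."ID"
    LIMIT 10;'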

Looking at a second backup where file B.txt was added, one can see it finishing similarly and moving on.

2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: INSERT INTO "FileLookup" ("PrefixID", "Path", "BlocksetID", "MetadataID") VALUES (1,"B.txt",3,2); SELECT last_insert_rowid(); took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration" 
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT "Key", "Value" FROM "Configuration"  took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "Block" WHERE "Size" > 102400
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "Block" WHERE "Size" > 102400 took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "Configuration" 
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "Configuration"  took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES ("blocksize", "102400") 
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES ("blocksize", "102400")  took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES ("blockhash", "SHA256") 
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES ("blockhash", "SHA256")  took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES ("filehash", "SHA256") 
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES ("filehash", "SHA256")  took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES ("passphrase", "no-encryption") 
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES ("passphrase", "no-encryption")  took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES ("repair-in-progress", "true") 
2023-03-15 18:40:14 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES ("repair-in-progress", "true")  took 0:00:00:00.000
2023-03-15 18:40:14 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitUpdateFilesetFromRemote]: Starting - CommitUpdateFilesetFromRemote
2023-03-15 18:40:15 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-CommitUpdateFilesetFromRemote]: CommitUpdateFilesetFromRemote took 0:00:00:00.141
2023-03-15 18:40:15 -04 - [Information-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-FilelistsRestored]: Filelists restored, downloading 2 index files

For testing (because the system is slow and there are lots of versions), add --version=0 to just do the latest version.
This might also prove or disprove a wild guess that you ran out of /tmp space by doing so many dlist files.
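
For example (a sketch based on your earlier command, options abbreviated):

  # Same repair command as before, limited to the newest version:
  sudo mono /usr/lib/duplicati/Duplicati.CommandLine.exe repair "ssh://..." --version=0 --console-log-level=verbose
  # In another terminal, watch /tmp while it runs, to test the space guess:
  watch -n 60 df -h /tmp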

Unable to recreate database, fails with “SQLite error cannot rollback - no transaction is active”
has a similar-looking stack, found by a Google search on a couple of the things you posted. Thanks for that.

Although profiling is pretty noisy, you can add --profile-all-database-queries for even more.
That will probably make a bigger difference later on, though, when the individual blocks get handled.
Going up from verbose to profiling (via --log-file=<path> and --log-file-log-level) might be sufficient.
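
Putting that together with your earlier command, a sketch (options abbreviated; the log path is just an example):

  sudo mono /usr/lib/duplicati/Duplicati.CommandLine.exe repair "ssh://..." \
    --dbpath=/usr/lib/duplicati/data/EFZQVVNWQU.sqlite \
    --log-file=/tmp/repair.log --log-file-log-level=profiling \
    --profile-all-database-queries=true \
    --console-log-level=verbose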

Forgot to update: I got the database to repair, but not before running into space issues a few times. Note to self: make sure the SQLite database has several GB of room. In my case, I needed about 8 GB of extra space for it to work properly.
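
For anyone who hits the same thing, a quick pre-flight check (generic shell; adjust the paths to your own --dbpath and temp directory):

  # Confirm several GB free where the database lives and where temp files go:
  df -h /usr/lib/duplicati/data /tmp
  # If /tmp is tight, --tempdir=/path/with/space can point Duplicati's temp files at a roomier disk.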