Advice on validating/recovering/recreating database

Things like this might be a dindex file mix-up. There was once a bug where two dindex files would be created for a single dblock; when the dblock/dindex pair was later deleted, a leftover dindex pointing to a now-gone dblock could remain. Or it's possible something went wrong on the dblock side. Regardless, that dindex isn't useful any more, and can be deleted if you want to spare yourself some warnings.

I thought you were running a verbose log showing how far towards all-files it was:

That message was sadly unspecific…

These are pretty harmless, and wouldn’t even be shown if Verbose wasn’t in use.

I think these are caused by disagreement over which dblock lays claim to a block. The code was added by a previous developer. The current dev once commented on this, but I'm not convinced it's a dindex problem rather than confusion in a dblock. It may even be normal (despite a dev putting logging on it). Blocks are sometimes repeated in different dblock files because blocks from deleted source files are not scavenged for re-use; instead, the same block can be put into the backup again. The database has the information to tell the difference, but without the database, e.g. during a Recreate, the same block really is stored in two places. If dindex files are used, these wind up in the DuplicateBlock table, but if not (and maybe only in pass 3 of the dblock downloads), logging occurs.
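To make the idea concrete, here is a minimal sketch (hypothetical names, not Duplicati's actual code) of how a recreate might handle a block hash that shows up in two dblock volumes: the first volume wins, and later sightings are recorded separately, much like the DuplicateBlock table mentioned above.

```python
# Hypothetical sketch of duplicate-block bookkeeping during a database
# recreate. All names are illustrative, not Duplicati's real internals.

def index_blocks(volumes):
    """Map each block hash to the first volume holding it; record any
    later volume holding the same hash as a duplicate."""
    block_to_volume = {}
    duplicates = []  # (block_hash, extra_volume), like a DuplicateBlock table
    for volume_name, block_hashes in volumes:
        for h in block_hashes:
            if h in block_to_volume and block_to_volume[h] != volume_name:
                duplicates.append((h, volume_name))
            else:
                block_to_volume[h] = volume_name
    return block_to_volume, duplicates

# The same block "abc" stored in two dblock files, as described above:
mapping, dups = index_blocks([
    ("dblock-1", ["abc", "def"]),
    ("dblock-2", ["abc", "ghi"]),
])
```

With the database present, only one copy of "abc" would be referenced; without it, both sightings are real and the second one has to be accounted for somehow.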

I’m not sure if this will get dev’s attention to comment, and I couldn’t repro it easily because I was having trouble getting to pass 3 (or maybe for some other reasons).

Give it a whirl. This one's been messy enough that I'm not going to try a forecast. Another option would be purge-broken-files, since list-broken-files is complaining. Sometimes a database recreate knows it didn't finish and will block operations; purge-broken-files is sometimes a way out, but deleting a version can also often help.

Attempting to delete fileset 16…

GUI -> Command Line, select delete, remove the command-line arguments, remove all the --exclude filters from the options (using edit as text), and then add --version=16.

Finished!

Listing remote folder ...
The operation Delete has failed with error: Unexpected number of deleted filesets 5 vs 6 => Unexpected number of deleted filesets 5 vs 6

System.Exception: Unexpected number of deleted filesets 5 vs 6
   at Duplicati.Library.Main.Database.LocalDeleteDatabase.DropFilesetsFromTable(DateTime[] toDelete, IDbTransaction transaction)+MoveNext()
   at System.Collections.Generic.LargeArrayBuilder`1.AddRange(IEnumerable`1 items)
   at System.Collections.Generic.EnumerableHelpers.ToArray[T](IEnumerable`1 source)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRunAsync(LocalDeleteDatabase db, ReusableTransaction rtr, Boolean hasVerifiedBackend, Boolean forceCompact, IBackendManager backendManager)
   at Duplicati.Library.Main.Operation.DeleteHandler.RunAsync(IBackendManager backendManager)
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Func`3 method)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, Func`3 method)
   at Duplicati.Library.Main.Controller.Delete()
   at Duplicati.CommandLine.Commands.Delete(TextWriter outwriter, Action`1 setup, List`1 args, Dictionary`2 options, IFilter filter)
   at Duplicati.CommandLine.Program.ParseCommandLine(TextWriter outwriter, Action`1 setup, Boolean& verboseErrors, String[] args)
   at Duplicati.CommandLine.Program.RunCommandLine(TextWriter outwriter, TextWriter errwriter, Action`1 setup, String[] args)
Return code: 100

EDIT 1
Looking at the log I think it is trying to delete filesets that don’t exist?

My fault - it wants to delete daily filesets from 2025 - I was thinking 2024.

At any rate here is the log:


Apr 3, 2025 5:55 PM: The operation Delete has failed with error: Unexpected number of deleted filesets 5 vs 6
Apr 3, 2025 5:55 PM: Deleting 6 remote fileset(s) ...
Apr 3, 2025 5:55 PM: All backups to delete: 3/20/2025 2:14:52 PM, 3/18/2025 8:40:53 AM, 3/3/2025 1:00:00 AM, 2/24/2025 1:00:00 AM, 3/31/2024 2:00:01 AM
Apr 3, 2025 5:55 PM: Backups outside of all time frames and thus getting deleted: 3/31/2024 2:00:01 AM
Apr 3, 2025 5:55 PM: Backups to consider: 3/20/2025 2:14:52 PM, 3/18/2025 8:40:53 AM, 3/17/2025 2:00:00 AM, 3/10/2025 2:00:00 AM, 3/3/2025 1:00:00 AM, 2/24/2025 1:00:00 AM, 2/17/2025 1:00:00 AM, 1/11/2025 1:00:00 AM, 12/7/2024 1:00:00 AM, 11/1/2024 2:00:00 AM, 9/27/2024 2:00:00 AM, 8/23/2024 2:00:00 AM, 7/19/2024 2:00:00 AM, 6/13/2024 6:55:28 AM, 5/6/2024 6:55:27 PM, 3/31/2024 2:00:01 AM
Apr 3, 2025 5:55 PM: Time frames and intervals pairs: 7.00:00:00 / 1.00:00:00, 28.00:00:00 / 7.00:00:00, 365.00:00:00 / 31.00:00:00
Apr 3, 2025 5:55 PM: Start checking if backups can be removed
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-be08363c5955841a884a54b377b9e3b80.dblock.zip.aes
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-bbac5b08655bf4a83821420bdf47e0ad7.dblock.zip.aes
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-be37f48f685694d5c8992cf3012996551.dblock.zip.aes
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-bf4455ddb00ea42869b977d38be35c3ae.dblock.zip.aes
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-b9b4c15447af5438c967405912e271a37.dblock.zip.aes
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-be51ecf1a194240059cdd1c5d470eddf5.dblock.zip.aes
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-ba34643f3b39e42f4a0e9d625836d3317.dblock.zip.aes
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-bc57487279501407f87c228974edd42cd.dblock.zip.aes
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-ba7dd74eac3104db8aef4975f4c1dc998.dblock.zip.aes
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-b0d48e7e01f734ebc8c55f39800084036.dblock.zip.aes
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-b96665dae446b4aa4aea59d9e4824919e.dblock.zip.aes
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-b817d43ff418e42659f190ed6ebfe0080.dblock.zip.aes
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-beb824dc12e0d448bb18ef72ebdf0ccdf.dblock.zip.aes
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-b699f94458aef4508a7e852b966d4a3c7.dblock.zip.aes
Apr 3, 2025 5:55 PM: Removing file listed as Temporary: duplicati-b6dff175c02fd4a6591a1898ef327f4f7.dblock.zip.aes
Apr 3, 2025 5:55 PM: Backend event: List - Completed: (13.795 KiB)
Apr 3, 2025 5:55 PM: Backend event: List - Started: ()
Apr 3, 2025 5:54 PM: The operation Delete has started 

EDIT 2

Found a way to make it work. It wants to auto-delete 5 filesets (including 16) and doesn't like it when I include one of those filesets with --version=16. But it proceeds when I give it --version=15 (which it was not going to auto-delete).

I think there may be a bug where your --version is added to the retention-policy work that already wanted to delete 5. I'm not sure it realized that the one you specified is already selected.
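If that theory is right, the "5 vs 6" mismatch falls out naturally: the request counts the overlapping fileset twice, but only one row actually gets deleted. A toy illustration (not Duplicati's code; fileset numbers are made up):

```python
# Illustration of the suspected "Unexpected number of deleted filesets
# 5 vs 6" mismatch: retention already selected fileset 16, the user
# adds --version=16 on top, and the combined request counts it twice
# unless it is deduplicated first. Numbers are invented for the demo.

retention_selected = [16, 12, 8, 4, 0]   # 5 filesets retention wants gone
user_requested = [16]                    # --version=16, already in that set

naive = retention_selected + user_requested   # expects 6 deletions...
actually_deleted = set(naive)                 # ...but only 5 are distinct

assert len(naive) == 6
assert len(actually_deleted) == 5             # hence "5 vs 6"

# Deduplicating up front keeps the expectation and the result in step:
deduped = sorted(set(retention_selected) | set(user_requested))
```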

You can record your retention policy, change the setting to keep all versions, and try yours again.

I was able to delete fileset 15, which also auto-deleted the others.

list-broken-files then only showed fileset 4 as having broken files, so I deleted that, with no apparent errors.

Now when I do the list-broken-files command, it returns

The operation ListBrokenFiles has failed with error: Cannot continue because the database is marked as being under repair, but does not have broken files. => Cannot continue because the database is marked as being under repair, but does not have broken files.

ErrorID: CannotListOnDatabaseInRepair
Cannot continue because the database is marked as being under repair, but does not have broken files.

[In case it matters, I am using duplicati-cli for these commands]

As a precaution I stopped the server process and made a copy of the database.

I then restarted the server and tried a repair, and it returned

The operation Repair has started
The database is marked as "in-progress" and may be incomplete.
The operation Repair has failed with error: The database was attempted repaired, but the repair did not complete. This database may be incomplete and the repair process is not allowed to alter remote files as that could result in data loss. => The database was attempted repaired, but the repair did not complete. This database may be incomplete and the repair process is not allowed to alter remote files as that could result in data loss.

ErrorID: DatabaseIsInRepairState
The database was attempted repaired, but the repair did not complete. This database may be incomplete and the repair process is not allowed to alter remote files as that could result in data loss.

I hit this circular problem myself here. You can search for RepairInProgress in there.

The path to removing the in-progress marker is purge-broken-files, but it runs list-broken-files, which finds no broken files, so it blocks exactly what you want: removal of the in-progress marker.

I don’t think I got dev feedback on clearing that marker. Maybe we’ll hear tomorrow?

Another thing I could try is to upgrade to v2.1.0.112 - Canary (currently on .111).

The 2.1.0.112 release notice details changes, but none that look clearly relevant.

This deadlock is very solid. I just tested a different version, manually setting DB’s repair-in-progress to true. Repair won’t run because it’s properly concerned about trimming destination files to match what might be a partial database. We’ve discussed list-broken-files and purge-broken-files, but what I feel they’re misreporting is when they say nothing’s broken. I deleted (one at a time) a dlist, dindex, and dblock file (which was the only dblock, and it certainly broke files).

My hope had been that if I could break something, it would notice, and so proceed.

I suppose we could try to get a clean Recreate. This is where downloading the files locally would be nice (keeping an original copy and one where we can experiment).

This is the kind of setup I had on the big mess I cited on my own backup. A way to avoid two local copies is to rclone sync from destination when doing another run.

I’d like to wait for dev advice, but response (not just here) has been behind recently.

Just to clarify the local database for others reaching this post, as I think both @jberry02 and @ts678 already know it.

Most operations Duplicati performs are faster if there is a database involved. The backup process requires a local database for speed reasons, and so does the restore process. You can set the --auto-recreate flag to get Duplicati to recreate the database if it is missing.

If you configure a backup configuration in the UI, it will point to a database (which may or may not exist). If you have --auto-recreate set, most operations will create the database for you. Once created, the database will be used for (almost) all operations.

If there is no database, Duplicati will do some operations on the remote storage (like listing which versions exist). If you go a bit further in the restore flow, Duplicati will create a partial temporary database with just the information needed for the operation. This file is stored in the temporary folder and deleted later.

A partially recreated database has a flag set that prevents using it for (almost) anything other than restore. Since you have a database, it sounds like it is a recreated database?

I have been chasing variants of this for a while. It sounds like you are running 2.1.0.111, and there is actually some extra coverage for this particular error in 2.1.0.112. The repair process should be able to detect the cause of this and fix the database for you. I am a bit worried that this error has snuck into a freshly recreated database though.

After what operation did it start giving this error?

Once the database is in this state it will not get out of it as @ts678 mentions. You can bravely edit the database and remove the flag, but you risk breaking the backup in unexpected ways. I would not recommend it.
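For anyone who does decide to brave it, here is what that edit might look like with Python's sqlite3, run against a copy of the database. The `Configuration` table and `repair-in-progress` key are assumptions for illustration only; verify the actual schema and key in your own database first, and heed the warning above that removing the flag risks breaking the backup.

```python
# Sketch of inspecting (and clearing) an assumed repair-in-progress
# marker with Python's sqlite3. The "Configuration" table and the
# "repair-in-progress" key are illustrative assumptions, not verified
# Duplicati schema; only ever work on a copy of the database.
import os
import sqlite3
import tempfile

def get_flag(db_path, key="repair-in-progress"):
    """Return the marker's value, or None if it is not set."""
    with sqlite3.connect(db_path) as con:
        row = con.execute(
            'SELECT "Value" FROM "Configuration" WHERE "Key" = ?', (key,)
        ).fetchone()
    return row[0] if row else None

def clear_flag(db_path, key="repair-in-progress"):
    """Delete the marker (the 'bravely edit the database' step)."""
    with sqlite3.connect(db_path) as con:
        con.execute('DELETE FROM "Configuration" WHERE "Key" = ?', (key,))

# Demo against a throwaway file mimicking the assumed schema:
path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")
con = sqlite3.connect(path)
con.execute('CREATE TABLE "Configuration" ("Key" TEXT, "Value" TEXT)')
con.execute('INSERT INTO "Configuration" VALUES (?, ?)',
            ("repair-in-progress", "true"))
con.commit()
con.close()

assert get_flag(path) == "true"
clear_flag(path)
assert get_flag(path) is None
```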

I am always in favor of experimenting, but it depends on how much storage space and time @jberry02 is willing to spend on it.

For a quick run, I would use 2.1.0.112 and simply recreate the database.
Since some of the broken stuff has been removed by the purges, it should now recreate more cleanly. With the fixes from 2.1.0.112 for the fileset issue, I would hope it works right away.

If you have more time to spend on it, you can perhaps create a bug-report database. This is a copy of the database where all paths and path-like strings have been changed to something random-ish so as not to reveal the database contents. With the broken database (and ideally the database from before it broke on repair), I might be able to figure out how to prevent this from happening.

That is a good theory! It would give weird results if that is the case, but the UI should not send in the retention values except on backups :thinking:

EDIT:

Although maybe I misunderstand (and I’m not watching UI traffic right now).
The retention values would be in the database, not sent each time, correct?

The database was created as a side-effect of doing the restore from a saved configuration file.

Version is indeed 2.1.0.111_canary_2025-03-15.

To be clear, the Unexpected number of deleted filesets 5 vs 6 is from issuing the “delete” command. I don’t remember at this point if it was from duplicati-cli or “command line” from within the GUI. I think ts678 indicated this was a known bug where the “delete” command determines it wants to delete expired filesets but then gets confused by the additional filesets provided on the command line?

As I recall I did something that let Duplicati delete the expired filesets, and then issued “delete” commands against the filesets with broken files.

I’m willing to give it a try.

Database is ~3GB. If you are still interested, please provide instructions (or point me to them) for creating a bug-report database and where to upload.

This is what I was referring to above. It was not a backup, but a “delete” command. Don’t recall if it was duplicati-cli or using “command line” via the GUI.

FWIW, I have the following copies of the local database:

-rw-------. 1 root root 11142778880 Jan  4 01:15 '/home/Duplicati-holding-pen/backup UMYFGVUULA 20250104082829.sqlite'
-rw-------. 1 root root   142233600 Mar 25 16:03  /home/Duplicati.HOLD/UMYFGVUULA.sqlite
-rw-r--r--. 1 root root   528691200 Mar 25 16:04 '/home/Duplicati.HOLD/backup UMYFGVUULA 20250104082829.sqlite'
-rw-------. 1 root root  2899038208 Apr  3 19:03  /home/Duplicati-holding-pen/UMYFGVUULA.sqlite
-rw-------. 1 root root  2899038208 Apr  3 19:15  UMYFGVUULA.sqlite

The Jan 4 copy would be directly from a “safety” disk copy from laptop A prior to moving to laptop B.
The Mar 25 copies would be initial failed repair/recreate attempt(s).
The Apr 3 copy in the “holding pen” is probably post-recreate but before doing anything else.
The other Apr 3 copy is the current problem database - I will save a copy of this before upgrading to 112 and attempting another repair/recreate.

Sigh. Downloading dblock files.

Apr 11, 2025 9:57 PM: Backend event: Get - Completed: duplicati-b4b9a60ef4e324c3d8336fc57aa3c5168.dblock.zip.aes (100.446 MiB)
Apr 11, 2025 9:57 PM: Pass 1 of 3, processing blocklist volume 57 of 169 

Unless .112 has fewer passes or something, this is going to take another 2.5 days.

OK - This is an improvement! Repair/recreate only took ~9 hours. Here is the final “complete log”:


{
  "MainOperation": "Repair",
  "RecreateDatabaseResults": {
    "MainOperation": "Repair",
    "ParsedResult": "Success",
    "Interrupted": false,
    "Version": "2.1.0.112 (2.1.0.112_canary_2025-03-26)",
    "EndTime": "2025-04-12T07:14:51.5783456Z",
    "BeginTime": "2025-04-11T22:17:14.6930579Z",
    "Duration": "08:57:36.8852877",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null,
    "BackendStatistics": {
      "RemoteCalls": 8749,
      "BytesUploaded": 0,
      "BytesDownloaded": 18524859228,
      "FilesUploaded": 0,
      "FilesDownloaded": 8748,
      "FilesDeleted": 0,
      "FoldersCreated": 0,
      "RetryAttempts": 0,
      "UnknownFileSize": 0,
      "UnknownFileCount": 0,
      "KnownFileCount": 0,
      "KnownFileSize": 0,
      "KnownFilesets": 0,
      "LastBackupDate": "0001-01-01T00:00:00",
      "BackupListCount": 0,
      "TotalQuotaSpace": 0,
      "FreeQuotaSpace": 0,
      "AssignedQuotaSpace": 0,
      "ReportedQuotaError": false,
      "ReportedQuotaWarning": false,
      "MainOperation": "Repair",
      "ParsedResult": "Success",
      "Interrupted": false,
      "Version": "2.1.0.112 (2.1.0.112_canary_2025-03-26)",
      "EndTime": "0001-01-01T00:00:00",
      "BeginTime": "2025-04-11T22:17:14.644535Z",
      "Duration": "00:00:00",
      "MessagesActualLength": 0,
      "WarningsActualLength": 0,
      "ErrorsActualLength": 0,
      "Messages": null,
      "Warnings": null,
      "Errors": null
    }
  },
  "ParsedResult": "Warning",
  "Interrupted": false,
  "Version": "2.1.0.112 (2.1.0.112_canary_2025-03-26)",
  "EndTime": "2025-04-12T07:16:28.516149Z",
  "BeginTime": "2025-04-11T22:17:14.6442829Z",
  "Duration": "08:59:13.8718661",
  "MessagesActualLength": 17504,
  "WarningsActualLength": 17,
  "ErrorsActualLength": 0,
  "Messages": [
    "2025-04-11 18:17:14 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Repair has started",
    "2025-04-11 18:17:15 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
    "2025-04-11 18:17:21 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (13.437 KiB)",
    "2025-04-11 18:18:25 -04 - [Information-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-RebuildStarted]: Rebuild database started, downloading 10 filelists",
    "2025-04-11 18:18:25 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20240613T105528Z.dlist.zip.aes (37.390 MiB)",
    "2025-04-11 18:18:29 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20240613T105528Z.dlist.zip.aes (37.390 MiB)",
    "2025-04-11 18:18:29 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20240719T060000Z.dlist.zip.aes (37.609 MiB)",
    "2025-04-11 18:18:31 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20240719T060000Z.dlist.zip.aes (37.609 MiB)",
    "2025-04-11 18:19:40 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20240823T060000Z.dlist.zip.aes (37.741 MiB)",
    "2025-04-11 18:19:44 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20240823T060000Z.dlist.zip.aes (37.741 MiB)",
    "2025-04-11 18:20:27 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20240927T060000Z.dlist.zip.aes (37.619 MiB)",
    "2025-04-11 18:20:29 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20240927T060000Z.dlist.zip.aes (37.619 MiB)",
    "2025-04-11 18:21:14 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20241101T060000Z.dlist.zip.aes (37.477 MiB)",
    "2025-04-11 18:21:17 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20241101T060000Z.dlist.zip.aes (37.477 MiB)",
    "2025-04-11 18:22:05 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20241207T060000Z.dlist.zip.aes (39.520 MiB)",
    "2025-04-11 18:22:08 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20241207T060000Z.dlist.zip.aes (39.520 MiB)",
    "2025-04-11 18:22:56 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20250217T060000Z.dlist.zip.aes (27.799 MiB)",
    "2025-04-11 18:22:58 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20250217T060000Z.dlist.zip.aes (27.799 MiB)",
    "2025-04-11 18:23:58 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-20250310T060000Z.dlist.zip.aes (28.248 MiB)",
    "2025-04-11 18:24:00 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-20250310T060000Z.dlist.zip.aes (28.248 MiB)"
  ],
  "Warnings": [
    "2025-04-11 18:30:33 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-b6dff175c02fd4a6591a1898ef327f4f7.dblock.zip.aes by duplicati-i171ff848acae4064bd29e0167b4ff851.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 18:33:47 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-b699f94458aef4508a7e852b966d4a3c7.dblock.zip.aes by duplicati-i287b21d1ab7943a18d2e5df01cf002c0.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 18:34:38 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-beb824dc12e0d448bb18ef72ebdf0ccdf.dblock.zip.aes by duplicati-i2cde9078a113416ab910ed3f5d778878.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 18:38:04 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-b817d43ff418e42659f190ed6ebfe0080.dblock.zip.aes by duplicati-i3f84f61f4ec9485291db92359709ed81.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 18:39:07 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-b96665dae446b4aa4aea59d9e4824919e.dblock.zip.aes by duplicati-i4645a522a80b4bbca434120271ca8c4f.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 18:41:39 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-b0d48e7e01f734ebc8c55f39800084036.dblock.zip.aes by duplicati-i548b89c8a04f4e10bff7052fedc70623.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 18:42:19 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-ba7dd74eac3104db8aef4975f4c1dc998.dblock.zip.aes by duplicati-i58182adeeca644de9791c9ad811531cb.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 18:50:45 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-bc57487279501407f87c228974edd42cd.dblock.zip.aes by duplicati-i846dbd8517e542a3bedf9188330f52b3.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 18:58:08 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-ba34643f3b39e42f4a0e9d625836d3317.dblock.zip.aes by duplicati-iae5792745775425c8dbc175cd865286e.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 18:59:32 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-be51ecf1a194240059cdd1c5d470eddf5.dblock.zip.aes by duplicati-ib5a680506f374b92b25478e3b6911525.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 19:02:44 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-b9b4c15447af5438c967405912e271a37.dblock.zip.aes by duplicati-ic687e52ce32d4ff4885826844fc1a729.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 19:07:54 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-bf4455ddb00ea42869b977d38be35c3ae.dblock.zip.aes by duplicati-ie1c5e13c339d4471b3d5e019c6a9a650.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 19:08:16 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-be37f48f685694d5c8992cf3012996551.dblock.zip.aes by duplicati-ie4359d7fec084dce8ba44d88585fc409.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 19:11:52 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-bbac5b08655bf4a83821420bdf47e0ad7.dblock.zip.aes by duplicati-if6b73b10ce3d467d80a2fe1b343be03b.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 19:12:24 -04 - [Warning-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-be08363c5955841a884a54b377b9e3b80.dblock.zip.aes by duplicati-if93041e449044c2584f7af376cc7f0cc.dindex.zip.aes, but not found in list, registering a missing remote file",
    "2025-04-11 19:13:36 -04 - [Warning-Duplicati.Library.Main.Database.LocalRecreateDatabase-MissingVolumesDetected]: Replaced blocks for 15 missing volumes; there are now 15 missing volumes",
    "2025-04-12 03:12:40 -04 - [Warning-Duplicati.Library.Main.Database.LocalRecreateDatabase-MissingVolumesDetected]: Replaced blocks for 15 missing volumes; there are now 0 missing volumes"
  ],
  "Errors": [],
  "BackendStatistics": {
    "RemoteCalls": 8749,
    "BytesUploaded": 0,
    "BytesDownloaded": 18524859228,
    "FilesUploaded": 0,
    "FilesDownloaded": 8748,
    "FilesDeleted": 0,
    "FoldersCreated": 0,
    "RetryAttempts": 0,
    "UnknownFileSize": 0,
    "UnknownFileCount": 0,
    "KnownFileCount": 0,
    "KnownFileSize": 0,
    "KnownFilesets": 0,
    "LastBackupDate": "0001-01-01T00:00:00",
    "BackupListCount": 0,
    "TotalQuotaSpace": 0,
    "FreeQuotaSpace": 0,
    "AssignedQuotaSpace": 0,
    "ReportedQuotaError": false,
    "ReportedQuotaWarning": false,
    "MainOperation": "Repair",
    "ParsedResult": "Success",
    "Interrupted": false,
    "Version": "2.1.0.112 (2.1.0.112_canary_2025-03-26)",
    "EndTime": "0001-01-01T00:00:00",
    "BeginTime": "2025-04-11T22:17:14.644535Z",
    "Duration": "00:00:00",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null
  }
}

EDIT 1
Live log from “list-broken-files”:

Apr 12, 2025 7:43 AM: The operation ListBrokenFiles has completed
Apr 12, 2025 7:43 AM: Skipping operation because no files were found to be missing, and no filesets were recorded as broken.
Apr 12, 2025 7:43 AM: Backend event: List - Completed: (13.437 KiB)
Apr 12, 2025 7:43 AM: Backend event: List - Started: ()
Apr 12, 2025 7:43 AM: No broken filesets found in database, checking for missing remote files
Apr 12, 2025 7:42 AM: The operation ListBrokenFiles has started
Apr 12, 2025 7:42 AM: The operation List has completed
Apr 12, 2025 7:42 AM: The operation List has started 

Thanks for creating the issue.
The “commandline” view sends all the values, so you should be able to see in the advanced options that it sends in the “retention policy” as well. If you send in that value at the same time it will likely cause problems. Maybe the solution is simply to prevent sending in multiple retention values at the same time?

This is the part that is confusing me. If you use this screen:

And choose either “Direct restore” or “Restore from configuration” it will create a temporary database in the system temp folder. It should never create or use a database in the regular application data folder.

Can you elaborate on how you got it to create the partial database? Did you instead use the “Add backup”, “Import from file”, and then started a restore on that one?

If I do that, I get an error:

Yes, I think that could explain it. If so, the fix is easy. Just remove any of the settings from the list of options:

  • keep-time
  • keep-versions
  • retention-policy

And it should work fine.

To create the bug report database:

You can upload it to Dropbox, Google Drive, iCloud, WeTransfer or whatever you like. Then send me a link in a PM.

So it is all good now?

JavaScript seems like the wrong spot, since the true CLI and even internal C# code also delete.

The solution would be to deduplicate the request before doing it, but I think you just did:

Thanks for the fix.

My memory is that I did “Restore from configuration” and I am 95% sure my memory is correct. It definitely then took 2.5 days to download data from b2. Maybe I still had an out-of-date and out-of-sync database from my disk copy? Based on your comments that is the only thing I can suggest.

I definitely did NOT use “Add backup”.

I see no problems so far, but I have not yet attempted a backup while I wait for advice from either yourself or ts678.

(I have been backing up using a tarball to another system, which then backs up using Duplicati.)

Working on this… It will take a while.

EDIT - for the sake of anyone following this, links to the bug reports have been PM'd as requested.

I was thinking inside the C# delete code there would be a check, not (only) in the UI.
But yes, after checking the code, it was a simple fix: allow the options to co-exist and handle the cases where two delete/retention options would target the same fileset.

I think this makes more sense, as you can have multiple retention options (just not possible from the UI) and they will all be observed (most restrictive wins).
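A small sketch of what "most restrictive wins" could mean when several retention options apply at once (illustrative only; the policy names and version numbers are made up): a version survives only if every policy would keep it, so the kept set is the intersection of the individual keep-sets.

```python
# Illustrative combination of multiple retention policies where the
# most restrictive outcome wins: a version is deleted if ANY policy
# would delete it, i.e. kept = intersection of all keep-sets.
# Policy names and version numbers are invented for the demo.

def combine_policies(all_versions, keep_sets):
    kept = set(all_versions)
    for keep in keep_sets:
        kept &= set(keep)
    deleted = set(all_versions) - kept
    return sorted(kept), sorted(deleted)

versions = [0, 1, 2, 3, 4]
by_keep_versions = [0, 1, 2]     # e.g. a keep-versions=3 style policy
by_keep_time = [0, 1, 2, 3]      # e.g. a keep-time style policy

kept, deleted = combine_policies(versions, [by_keep_versions, by_keep_time])
```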

That would make some sense, but the “Restore from configuration” should always create a new temporary database. If you experience the issue again, I would love to hear how to reproduce it, as I have failed to do so.

At this point I think the only real issue with the local database recovery was the deadlock (you can't do anything until you run repair, and repair says there's no need to run). And as the database was then recreated after deleting the bad filesets (and using .112 instead of .111), I think I'm ready to try a backup, and if that works I think I am good-to-go.

So last chance to ask for any data prior to me trying a backup!