Repairing corrupted/broken backup job

Hi. I’m currently having trouble repairing a broken Duplicati backup job. I’m running Duplicati 2.2.0.3_stable_2026-01-06 on Windows 10.
The job in question backs up my Windows Documents folder to a WebDAV target. It used to back up to an external HDD that became faulty, so I moved everything to WebDAV.

My problem: as far as I can tell, two of the dblock files on the remote got corrupted.
I’ve tried just about everything I could find online, to no avail.

Using the list-broken-files and test commands, and by looking at the previous backup logs, I identified the 2 corrupted files. I then removed them from the remote storage.
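For reference, the commands I ran at this step looked roughly like this (the WebDAV URL and credentials below are placeholders, not my real target):

```shell
# Hypothetical invocation; substitute your own WebDAV URL and credentials.
# List the source files affected by missing/corrupted dblock volumes:
Duplicati.CommandLine.exe list-broken-files "webdav://example.com/backup?auth-username=user&auth-password=pass"

# Verify the remote volumes (this is where the corruption first showed up):
Duplicati.CommandLine.exe test "webdav://example.com/backup?auth-username=user&auth-password=pass" all
```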

After deleting (actually just moving) the files, I ran repair on the database, as instructed when I tried to test again.
Then I tried to purge the broken files, but it failed with this error:

 2026-02-01 13:10:13 +01 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation PurgeBrokenFiles has failed
 Exception: Unable to create a new fileset for duplicati-20250215T100001Z.dlist.zip.aes because the resulting timestamp 15.02.2025 11:00:04 is larger than the next timestamp 15.02.2025 11:00:02
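For completeness, the repair and purge attempts were roughly these two commands (same placeholder URL as above):

```shell
# Hypothetical invocation; the URL is a placeholder.
Duplicati.CommandLine.exe repair "webdav://example.com/backup?auth-username=user&auth-password=pass"
Duplicati.CommandLine.exe purge-broken-files "webdav://example.com/backup?auth-username=user&auth-password=pass"
```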

So this is my current state right now:
list-broken-files reports the same 8 backed-up files as broken across 18 filesets (versions 8 through 25).
Here is the current list of filesets, as reported by the list-filesets command.

 Listing filesets: 
 0	: 29.01.2026 10:00:00 (1436 files, 2,710 GB) 
 1	: 28.01.2026 10:00:00 (1436 files, 2,710 GB) 
 2	: 25.01.2026 10:00:00 (1436 files, 2,710 GB) 
 3	: 17.01.2026 14:57:25 (1436 files, 2,710 GB) 
 4	: 10.01.2026 10:00:00 (1436 files, 2,710 GB) 
 5	: 23.12.2025 10:00:01 (214 files, 43,665 GB), partial 
 6	: 05.12.2025 10:00:00 (1502 files, 11,735 GB) 
 7	: 30.10.2025 18:46:37 (1496 files, 6,054 GB) 
 8	: 05.10.2025 10:00:03 (1381 files, 52,898 GB), partial 
 9	: 05.10.2025 10:00:02 (1381 files, 52,898 GB), partial 
 10	: 05.10.2025 10:00:01 (1381 files, 52,898 GB), partial 
 11	: 01.10.2025 10:00:02 (1381 files, 52,898 GB), partial 
 12	: 01.10.2025 10:00:01 (1381 files, 52,898 GB), partial 
 13	: 30.09.2025 10:00:00 (1381 files, 52,898 GB) 
 14	: 26.09.2025 10:00:04 (1428 files, 72,107 GB), partial 
 15	: 26.09.2025 10:00:03 (1428 files, 72,107 GB), partial 
 16	: 26.09.2025 10:00:02 (1428 files, 72,107 GB), partial 
 17	: 26.09.2025 10:00:01 (1428 files, 72,107 GB), partial 
 18	: 29.08.2025 10:00:00 (1393 files, 58,608 GB) 
 19	: 25.07.2025 19:09:42 (1393 files, 58,607 GB) 
 20	: 22.06.2025 07:06:12 (1291 files, 58,566 GB) 
 21	: 18.05.2025 10:28:18 (1291 files, 58,566 GB) 
 22	: 24.03.2025 06:21:35 (1277 files, 55,324 GB) 
 23	: 15.02.2025 10:00:03 (1161 files, 55,112 GB), partial 
 24	: 15.02.2025 10:00:02 (1161 files, 55,112 GB), partial 
 25	: 15.02.2025 10:00:01 (1161 files, 55,112 GB), partial 
 26	: 14.02.2025 10:33:49 (1153 files, 51,121 GB) 
 Return code: 0 

At some point during my repair attempts, something in my database apparently broke, and now I can’t even test the files anymore. If I run test all, I get the following output (filenames redacted):

 The operation Test has failed => Found inconsistency in the following files while validating database: 
C:\Users\mamasch19\Documents\SOMEFILE, actual size 873572414, dbsize 764311614, blocksetid: 14379
C:\Users\mamasch19\Documents\SOMEFILE, actual size 680823897, dbsize 595831897, blocksetid: 14386
C:\Users\mamasch19\Documents\SOMEFILE, actual size 35389488, dbsize 30986288, blocksetid: 14388
C:\Users\mamasch19\Documents\SOMEFILE, actual size 687339195, dbsize 601527995, blocksetid: 14391
C:\Users\mamasch19\Documents\SOMEFILE, actual size 715440573, dbsize 625868800, blocksetid: 14394
... and 3 more. Run repair to fix it. 
 
 
 ErrorID: DatabaseInconsistency 
 Found inconsistency in the following files while validating database: 
C:\Users\mamasch19\Documents\SOMEFILE, actual size 873572414, dbsize 764311614, blocksetid: 14379
C:\Users\mamasch19\Documents\SOMEFILE, actual size 680823897, dbsize 595831897, blocksetid: 14386
C:\Users\mamasch19\Documents\SOMEFILE, actual size 35389488, dbsize 30986288, blocksetid: 14388
C:\Users\mamasch19\Documents\SOMEFILE, actual size 687339195, dbsize 601527995, blocksetid: 14391
C:\Users\mamasch19\Documents\SOMEFILE, actual size 715440573, dbsize 625868800, blocksetid: 14394
... and 3 more. Run repair to fix it. 
 Return code: 100 

If I run repair as suggested, it gives me this error:

 The operation Repair has failed => Some zero-length metadata entries could not be repaired. 
 
 
 ErrorID: MetadataRepairFailed 
 Some zero-length metadata entries could not be repaired. 
 Return code: 100 

When I try to purge the broken files using purge-broken-files, it fails with this error:

 The operation PurgeBrokenFiles has failed => Unable to create a new fileset for duplicati-20250215T100001Z.dlist.zip.aes because the resulting timestamp 15.02.2025 11:00:04 is larger than the next timestamp 15.02.2025 11:00:02 
 
 
 System.Exception: Unable to create a new fileset for duplicati-20250215T100001Z.dlist.zip.aes because the resulting timestamp 15.02.2025 11:00:04 is larger than the next timestamp 15.02.2025 11:00:02
   at Duplicati.Library.Main.Operation.PurgeFilesHandler.DoRunAsync(IBackendManager backendManager, LocalPurgeDatabase db, IFilter filter, Func`4 filtercommand, Single pgoffset, Single pgspan)
   at Duplicati.Library.Main.Operation.PurgeBrokenFilesHandler.RunAsync(IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Func`3 method)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, Func`3 method)
   at Duplicati.Library.Main.Controller.PurgeBrokenFiles(IFilter filter)
   at Duplicati.CommandLine.Commands.PurgeBrokenFiles(TextWriter outwriter, Action`1 setup, List`1 args, Dictionary`2 options, IFilter filter)
   at Duplicati.CommandLine.Program.ParseCommandLine(TextWriter outwriter, Action`1 setup, Boolean& verboseErrors, String[] args)
   at Duplicati.CommandLine.Program.RunCommandLine(TextWriter outwriter, TextWriter errwriter, Action`1 setup, String[] args) 
 Return code: 100 

It looks like purge-broken-files tries to write a replacement fileset (without the broken files) timestamped a few seconds after the original, but since there are (for whatever reason) three filesets within three seconds (10:00:01 through 10:00:03), the bumped timestamp collides with the next existing fileset.

Moving the “deleted” files back into the remote storage does not seem to change anything.

The only other options I can think of are to delete the filesets affected by the broken dblock files entirely, or to recreate the backup job from scratch.
But before doing either, I wanted to ask here for any other possible solution.
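If deleting the affected filesets really is the only way forward, I assume it would look something like this (version numbers taken from the list-filesets output above; the URL is a placeholder, and please correct me if the syntax is wrong):

```shell
# Hypothetical: delete the fileset versions that list-broken-files reported as broken (8 through 25).
# Filesets 0-7 and 26 would remain, so this should not empty the backup.
Duplicati.CommandLine.exe delete "webdav://example.com/backup?auth-username=user&auth-password=pass" --version=8-25
```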

Here are (some of) the guides/posts I followed:

Thanks in advance for looking into it. :smiley: