Error message about remote volumes marked as deleted

Greetings, I’ve looked through the posts from 2017, 2018, 2020, and 2021 but do not see a clear fix.

Figure 1: History of this job (only one affected by this issue)

Figure 2: Error message

So two things:

#1 - Is there an easy fix? If so, where are the steps documented or what should I do?

Assuming there is not, then…

#2 - I am OK with nuking the backup set (i.e. all data for this backup job) and starting from scratch. What would be the easiest way to go about that? I know that I can delete & recreate the database, but I don’t think this will purge the backup set on the repository.

Thanks!

I was able to fix this by purging the most recent handful of backups using the delete command with `--version=0-5` etc. Reference: Delete old versions of backup without running a new backup - #4 by Sinocelt

Great that you found a workaround.
Did this happen because there was an interrupted backup at some point?

It looks like the error happens if one of the remote dlist files is already marked for deletion.
When the delete command then tries to mark the dlist files, it only marks the ones that are not already marked, and this causes the error.
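The described mismatch can be sketched roughly like this (a toy Python model, not the actual C# code in Duplicati):

```python
# Toy model of the described consistency check: the delete pass wants to mark
# N dlist volumes as deleted, but a volume that is already marked gets skipped,
# so the marked-count comes up short and the operation aborts.
def drop_filesets(filesets_to_delete, volume_states):
    marked = 0
    for name in filesets_to_delete:
        if volume_states.get(name) != "Deleting":   # skip already-marked volumes
            volume_states[name] = "Deleting"
            marked += 1
    if marked != len(filesets_to_delete):
        raise Exception(
            "Unexpected number of remote volumes marked as deleted. "
            f"Found {len(filesets_to_delete)} filesets, but {marked} volumes")

states = {"dlist-A": "Verified", "dlist-B": "Deleting"}  # B was marked earlier
try:
    drop_filesets(["dlist-A", "dlist-B"], states)
except Exception as e:
    print(e)  # ... Found 2 filesets, but 1 volumes
```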

But I am not sure how to get into that state in the first place. Could you look at your first failed run and see if there are any hints as to how this could happen?

@TitaniumCoder477

I’m also curious about the history. Anything unusual before the problem?

Even without something going wrong, what caused a big version delete?

I’m also surprised that a 6-version delete worked, but the 8-version one didn’t.

That does beg the question of whether 2 more will get this problem again.

Did you by any chance set Advanced option no-backend-verification?

If you missed any popup warnings or errors, your job logs might have them.

About → Show log → Stored can also be good, if a job log was not written.

I’ll provide the first error’s “Complete log” JSON below. Not sure how helpful it will be, however.

That said, my setup is kinda unique, and I may have brought this on myself. It’s proven so easy to bloat my Linux OS in past years that I started leaning heavily on distrobox to run app instances in podman containers. So basically any app that installs a lot of dependencies gets “containerized.” Duplicati is one of these. It has run flawlessly for years, so I don’t think that is the issue. Rather, my increasingly careless attitude towards putting my computer into standby mode while a backup job is running or finishing up is more likely the root cause.

I run backups frequently enough that an occasional failure doesn’t bother me. This, however, was the first time I noticed consecutive failures: Duplicati was unable to recover from or overcome the initial failure. I am glad it was easy to solve, but I think the lesson for me is to adjust my backup schedules to avoid the occasional times I put the computer to sleep, or dual boot to Windows, to avoid having to wait for a job to finish or interrupting it.

{
  "DeletedFiles": 335,
  "DeletedFolders": 60,
  "ModifiedFiles": 616,
  "ExaminedFiles": 390175,
  "OpenedFiles": 1160,
  "AddedFiles": 544,
  "SizeOfModifiedFiles": 6460082964,
  "SizeOfAddedFiles": 278793571,
  "SizeOfExaminedFiles": 318915143297,
  "SizeOfOpenedFiles": 6785078874,
  "NotProcessedFiles": 0,
  "AddedFolders": 43,
  "TooLargeFiles": 0,
  "FilesWithError": 0,
  "ModifiedFolders": 0,
  "ModifiedSymlinks": 0,
  "AddedSymlinks": 0,
  "DeletedSymlinks": 0,
  "PartialBackup": false,
  "Dryrun": false,
  "MainOperation": "Backup",
  "CompactResults": null,
  "VacuumResults": null,
  "DeleteResults": {
    "DeletedSetsActualLength": 0,
    "DeletedSets": null,
    "Dryrun": false,
    "MainOperation": "Delete",
    "CompactResults": null,
    "ParsedResult": "Success",
    "Interrupted": false,
    "Version": "2.0.7.103 (2.0.7.103_canary_2024-04-19)",
    "EndTime": "0001-01-01T00:00:00",
    "BeginTime": "2025-01-04T19:05:42.088823Z",
    "Duration": "00:00:00",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null,
    "BackendStatistics": {
      "RemoteCalls": 48,
      "BytesUploaded": 2262273683,
      "BytesDownloaded": 0,
      "FilesUploaded": 47,
      "FilesDownloaded": 0,
      "FilesDeleted": 0,
      "FoldersCreated": 0,
      "RetryAttempts": 0,
      "UnknownFileSize": 0,
      "UnknownFileCount": 0,
      "KnownFileCount": 10353,
      "KnownFileSize": 532683907015,
      "LastBackupDate": "2025-01-03T22:22:18-05:00",
      "BackupListCount": 66,
      "TotalQuotaSpace": 0,
      "FreeQuotaSpace": 0,
      "AssignedQuotaSpace": -1,
      "ReportedQuotaError": false,
      "ReportedQuotaWarning": false,
      "MainOperation": "Backup",
      "ParsedResult": "Success",
      "Interrupted": false,
      "Version": "2.0.7.103 (2.0.7.103_canary_2024-04-19)",
      "EndTime": "0001-01-01T00:00:00",
      "BeginTime": "2025-01-04T18:22:41.01632Z",
      "Duration": "00:00:00",
      "MessagesActualLength": 0,
      "WarningsActualLength": 0,
      "ErrorsActualLength": 0,
      "Messages": null,
      "Warnings": null,
      "Errors": null
    }
  },
  "RepairResults": null,
  "TestResults": null,
  "ParsedResult": "Fatal",
  "Interrupted": false,
  "Version": "2.0.7.103 (2.0.7.103_canary_2024-04-19)",
  "EndTime": "2025-01-04T19:07:06.701142Z",
  "BeginTime": "2025-01-04T18:22:41.016313Z",
  "Duration": "00:44:25.6848290",
  "MessagesActualLength": 106,
  "WarningsActualLength": 0,
  "ErrorsActualLength": 2,
  "Messages": [
    "2025-01-04 13:22:41 -05 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started",
    "2025-01-04 13:29:35 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
    "2025-01-04 13:29:39 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (10.11 KB)",
    "2025-01-04 13:29:40 -05 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-KeepIncompleteFile]: keeping protected incomplete remote file listed as Uploading: duplicati-20250104T032218Z.dlist.zip.aes",
    "2025-01-04 13:29:43 -05 - [Information-Duplicati.Library.Main.Operation.Backup.UploadSyntheticFilelist-PreviousBackupFilelistUpload]: Uploading filelist from previous interrupted backup",
    "2025-01-04 13:30:19 -05 - [Information-Duplicati.Library.Main.Operation.Backup.RecreateMissingIndexFiles-RecreateMissingIndexFile]: Re-creating missing index file for duplicati-bc5a64c91f87c4c51a3621334afb0795f.dblock.zip.aes",
    "2025-01-04 13:30:20 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-i3d48a5d4f335426d85ce9239852bbbd3.dindex.zip.aes (204.47 KB)",
    "2025-01-04 13:30:22 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-i3d48a5d4f335426d85ce9239852bbbd3.dindex.zip.aes (204.47 KB)",
    "2025-01-04 13:42:59 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-beb23e6393348447194f51ea0c4f26201.dblock.zip.aes (99.94 MB)",
    "2025-01-04 13:42:59 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b4772d13cf1544c6aa22c98368096704b.dblock.zip.aes (99.95 MB)",
    "2025-01-04 13:43:03 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b2dbf8b8f58644395b63b1a65d70db9ef.dblock.zip.aes (99.94 MB)",
    "2025-01-04 13:47:02 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-beb23e6393348447194f51ea0c4f26201.dblock.zip.aes (99.94 MB)",
    "2025-01-04 13:47:02 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-i3fe93b5f71d241908c6a3742d9c982b9.dindex.zip.aes (280.01 KB)",
    "2025-01-04 13:47:07 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-i3fe93b5f71d241908c6a3742d9c982b9.dindex.zip.aes (280.01 KB)",
    "2025-01-04 13:47:07 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bb1074b0a515d4317bcea0a86d0abb839.dblock.zip.aes (99.95 MB)",
    "2025-01-04 13:48:23 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b2dbf8b8f58644395b63b1a65d70db9ef.dblock.zip.aes (99.94 MB)",
    "2025-01-04 13:48:24 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-i4798b8e914d046c5aa305ec6fc3130c6.dindex.zip.aes (94.56 KB)",
    "2025-01-04 13:48:25 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-i4798b8e914d046c5aa305ec6fc3130c6.dindex.zip.aes (94.56 KB)",
    "2025-01-04 13:48:25 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b20bda26db13a4afe9b2c549332c3f553.dblock.zip.aes (99.93 MB)",
    "2025-01-04 13:48:36 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b4772d13cf1544c6aa22c98368096704b.dblock.zip.aes (99.95 MB)"
  ],
  "Warnings": [],
  "Errors": [
    "2025-01-04 14:06:52 -05 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error\nException: Unexpected number of remote volumes marked as deleted. Found 2 filesets, but 1 volumes",
    "2025-01-04 14:07:06 -05 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Backup has failed with error: Unexpected number of remote volumes marked as deleted. Found 2 filesets, but 1 volumes\nException: Unexpected number of remote volumes marked as deleted. Found 2 filesets, but 1 volumes"
  ],
  "BackendStatistics": {
    "RemoteCalls": 48,
    "BytesUploaded": 2262273683,
    "BytesDownloaded": 0,
    "FilesUploaded": 47,
    "FilesDownloaded": 0,
    "FilesDeleted": 0,
    "FoldersCreated": 0,
    "RetryAttempts": 0,
    "UnknownFileSize": 0,
    "UnknownFileCount": 0,
    "KnownFileCount": 10353,
    "KnownFileSize": 532683907015,
    "LastBackupDate": "2025-01-03T22:22:18-05:00",
    "BackupListCount": 66,
    "TotalQuotaSpace": 0,
    "FreeQuotaSpace": 0,
    "AssignedQuotaSpace": -1,
    "ReportedQuotaError": false,
    "ReportedQuotaWarning": false,
    "MainOperation": "Backup",
    "ParsedResult": "Success",
    "Interrupted": false,
    "Version": "2.0.7.103 (2.0.7.103_canary_2024-04-19)",
    "EndTime": "0001-01-01T00:00:00",
    "BeginTime": "2025-01-04T18:22:41.01632Z",
    "Duration": "00:00:00",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null
  }
}

Most helpful, it points to a situation similar to the one I described, so maybe it will be possible to reproduce.

That is exactly why I would like to fix it.

I am also facing this same issue (started in November, finally getting around to trying to fix it)

Unfortunately, running the delete command with all sorts of different version designations doesn’t seem to work for me - the delete command always fails with a variation of the same message about mismatched counts of filesets and volumes marked for deletion.

I looked a bit into this, and it looks like what happens is that one of the filesets is recorded as “Uploading”, and this breaks the update logic. Would you be able to look at the database with a tool like SQLiteBrowser?

From the UI you can click the “Database…” link to get the path to the database. Then open that one with SQLiteBrowser. Look at “Browse data”, then choose “RemoteVolume”.

Here you can see all remote files that Duplicati knows of. Filter the table by typing “Files” in the “Type” column. Now you should only see file lists. Then check the “State” column. You should only see “Uploaded” and “Verified” in this column. If you see something else, I would love to know as that would allow me to create a test that replicates the issue.
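For the command-line inclined, the same check can be done with a few lines of Python. The rows below are made-up demo data against an in-memory stand-in; on a real system, point `sqlite3.connect` at a copy of the path shown on the “Database…” page instead:

```python
import sqlite3

# Demo data; replace ":memory:" with (a copy of) the real job database path.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE RemoteVolume (Name TEXT, Type TEXT, State TEXT)")
db.executemany("INSERT INTO RemoteVolume VALUES (?,?,?)", [
    ("duplicati-20250101T000000Z.dlist.zip.aes", "Files", "Verified"),
    ("duplicati-20250102T000000Z.dlist.zip.aes", "Files", "Uploading"),  # the bad state
    ("duplicati-b1234.dblock.zip.aes", "Blocks", "Verified"),
])

# dlist entries should only ever be 'Uploaded' or 'Verified'
suspicious = [name for name, state in db.execute(
        "SELECT Name, State FROM RemoteVolume WHERE Type = 'Files'")
    if state not in ("Uploaded", "Verified")]
print(suspicious)  # ['duplicati-20250102T000000Z.dlist.zip.aes']
```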

Variations matter. Can you post the actual message? The two numbers are especially relevant.
You might be able to find the error in job logs, or server log at About → Show log → Stored.

If the first number is higher, then the Remotevolume State theory may fit. Not sure about lower.
Although that’s not common, there have been reports and also a somewhat contrived repro.

Database inspection sounds like a worthwhile test. If too hard, and DB is not painfully huge,
Create bug report button could download a sanitized DB you can store and post a link to.

Sure thing. In my case the first number is always lower. I see that this makes it a different case than the OP report, hadn’t noticed that earlier. I can make a new topic for this if you’d like.

Here are the error messages from the past 5 backup runs (latest to oldest) - stack trace omitted from all but the first since they’re all identical:

System.Exception: Unexpected number of remote volumes marked as deleted. Found 68 filesets, but 69 volumes
   at Duplicati.Library.Main.Database.LocalDeleteDatabase.DropFilesetsFromTable(DateTime[] toDelete, IDbTransaction transaction)+MoveNext()
   at System.Collections.Generic.LargeArrayBuilder`1.AddRange(IEnumerable`1 items)
   at System.Collections.Generic.EnumerableHelpers.ToArray[T](IEnumerable`1 source)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBackend, Boolean forceCompact, BackendManager sharedManager)
   at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IFilter filter, CancellationToken token)
   at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
   at Duplicati.Library.Main.Operation.BackupHandler.Run(String[] sources, IFilter filter, CancellationToken token)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass17_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)

System.Exception: Unexpected number of remote volumes marked as deleted. Found 67 filesets, but 68 volumes
System.Exception: Unexpected number of remote volumes marked as deleted. Found 50 filesets, but 51 volumes
System.Exception: Unexpected number of remote volumes marked as deleted. Found 49 filesets, but 50 volumes
System.Exception: Unexpected number of remote volumes marked as deleted. Found 48 filesets, but 49 volumes

If I try to run a delete command, I get similar errors but the numbers change:
With --version=0 I get:

System.Exception: Unexpected number of remote volumes marked as deleted. Found 52 filesets, but 53 volumes
   at Duplicati.Library.Main.Database.LocalDeleteDatabase.DropFilesetsFromTable(DateTime[] toDelete, IDbTransaction transaction)+MoveNext()
   at System.Collections.Generic.LargeArrayBuilder`1.AddRange(IEnumerable`1 items)
   at System.Collections.Generic.EnumerableHelpers.ToArray[T](IEnumerable`1 source)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBackend, Boolean forceCompact, BackendManager sharedManager)
   at Duplicati.Library.Main.Operation.DeleteHandler.Run()
   at Duplicati.Library.Main.Controller.<Delete>b__20_0(DeleteResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, Action`1 method)
   at Duplicati.Library.Main.Controller.Delete()
   at Duplicati.CommandLine.Commands.Delete(TextWriter outwriter, Action`1 setup, List`1 args, Dictionary`2 options, IFilter filter)
   at Duplicati.CommandLine.Program.ParseCommandLine(TextWriter outwriter, Action`1 setup, Boolean& verboseErrors, String[] args)
   at Duplicati.CommandLine.Program.RunCommandLine(TextWriter outwriter, TextWriter errwriter, Action`1 setup, String[] args)
Return code: 100

With --version=10 it’s Found 52 filesets, but 53 volumes as well.

I haven’t tried kenkendk’s suggestions although it sounds like that may not apply to the lower-first message I’m getting here. Happy to check it out though!

Probably best to wait for a reaction to the news. If you want to do something in advance, the Create bug report would give a view of the two disagreeing tables. People have reported trouble getting completion for big jobs, but if it’s too awful, a Task Manager kill should be safer than one during a backup.

If you really want to look at the DB, I can’t give you guidance as well as the developer can, but it’s the Fileset table and the Remotevolume table that might somehow be out of alignment on internal Fileset versus external Remotevolume info.
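The alignment check itself is just a couple of counts plus a join. Sketched here against a simplified demo schema (the real Fileset/Remotevolume tables have more columns, and the `VolumeID` linkage is my reading of the schema):

```python
import sqlite3

# Simplified demo schema; a dlist volume (ID 2) has no matching Fileset row.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Fileset (ID INTEGER, VolumeID INTEGER, Timestamp INTEGER)")
db.execute("CREATE TABLE RemoteVolume (ID INTEGER, Name TEXT, Type TEXT, State TEXT)")
db.execute("INSERT INTO Fileset VALUES (1, 1, 1740432614)")
db.executemany("INSERT INTO RemoteVolume VALUES (?,?,?,?)", [
    (1, "duplicati-A.dlist.zip.aes", "Files", "Verified"),
    (2, "duplicati-B.dlist.zip.aes", "Files", "Verified"),  # orphan
])

filesets = db.execute("SELECT COUNT(*) FROM Fileset").fetchone()[0]
volumes = db.execute("SELECT COUNT(*) FROM RemoteVolume "
                     "WHERE Type = 'Files' AND State != 'Deleted'").fetchone()[0]

# dlist volumes with no matching Fileset row
orphans = db.execute(
    "SELECT r.ID, r.Name FROM RemoteVolume r "
    "LEFT JOIN Fileset f ON f.VolumeID = r.ID "
    "WHERE r.Type = 'Files' AND r.State != 'Deleted' AND f.ID IS NULL").fetchall()

print(filesets, volumes)  # 1 2 -> the 'more volumes than filesets' case
print(orphans)            # [(2, 'duplicati-B.dlist.zip.aes')]
```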

EDIT:

The example only has one row. Per your report, you should have a lot more, but in both tables.

The Timestamp can be converted to a GMT date in epochconverter.com or similar, so mine is

February 24, 2025 9:30:14 PM which matches 20250224T213014 on the dlist file’s name.
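The same conversion in a few lines of Python, if epochconverter.com is not handy (the mapping from the Timestamp column to the dlist name is my understanding from the example above):

```python
from datetime import datetime, timezone

# Fileset.Timestamp holds Unix epoch seconds; the dlist filename embeds the
# same instant, in UTC, as yyyyMMddTHHmmssZ.
def dlist_stamp(epoch_seconds):
    dt = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    return dt.strftime("%Y%m%dT%H%M%SZ")

print(dlist_stamp(1740432614))  # 20250224T213014Z, as in duplicati-20250224T213014Z.dlist...
```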

If you prefer to post the bug report in some web-accessible storage, someone else can check it.

I’ve found a smoking gun in the Remotevolume table!

I have 70 Verified entries and a single Uploaded entry in this table. However, one of the records has an OperationID of 3, and its Hash is null. Based on the timestamp in the dlist filename, this is definitely related, as it’s the first timestamp after the last successful backup (the last successful backup was 2024-11-07, and 2024-11-13 is the next timestamp we can see in the files).


Oh, and I can verify that the Fileset table has 70 rows while the Remotevolume table has 71 Files entries. (thanks for the tip on comparing those @ts678)

And the Remotevolume ID 12040 (the one with the null hash) is missing from the Fileset table

If this were me just throwing shit at the wall, I would try manually deleting the .aes file from the server and then removing that row from the Remotevolume table. But I’ve got no idea if that will blow everything up. Or if you’d like me to preserve this state for root cause debugging!

I’d prefer to wait for a more expert opinion. However, in terms of root causes, I’d note that what I called a somewhat contrived repro required a DB recreate. You could look at the first row in the Operation table and see if it was a Backup or something else. If it looks like a normal start, then figuring out what went wrong later may be very difficult without a lot more logs to provide history.

Or maybe it will pop right out if the developer sees the DB bug report. Anyway, that’s a nice find.

Hi! Just wanted to chime in and mention that I’m facing the same issue. I’m glad to see some troubleshooting is happening; I’ll be happy to provide anything from my end if it helps.
My remote is set up with Dropbox.


Welcome to the forum @mathdu

What Duplicati version is this? Some versions had a worse-than-usual Create bug report, but one from someone would still be nice. Alternatively, looking in the DB (as was done before) will help a little.

Yours is also the case with the lower first number, so maybe your two table sizes are also misaligned. The question is how this happens. There’s a little info in the bug report, but ideally one sees it happen while running with a log file. The problem is that very few people will run one of those in advance.

As usual, developer input is welcome.

Thanks for the welcome :pray: and the quick reply!

My version is Duplicati - 2.1.0.3_beta_2025-01-22 on Windows 11.

here’s the bug report i could extract:
https://www.dropbox.com/scl/fi/crf5ou854yxc7h8shknxf/duplicati_bugreport.zip?rlkey=i5jt47xpo21ywenoqnls6ooo9&st=keal12yt&dl=0

Thanks for the bug report. What sort of destination is this? It seems to sometimes give trouble.

Below is from my first look, in the hope that the developer will be able to draw a better answer:

Operation table shows Backup ID 433-443. Not looking earlier, and 444 is BugReportCreate.

Fileset table has 2 rows from Operation 433,443

Remotevolume table has 9 rows (ignoring Deleted). The last got Uploaded, so not Verified.
Lack of verification is probably due to the run ending after the dlist put, before the list (see below).

LogData table has logs for 433-443. Filter on Message to get an idea of what they report.

LogData table "Unexpected number of remote volumes marked as deleted" in 436-439,441-443.
Counts (operation ID, filesets, volumes):
436 0 1
437 0 2
438 0 3
439 0 4
441 0 5
442 0 6
443 0 7

LogData table "DeleteRemoteFileset" in 433

LogData table "DeletedSetsActualLength"
433 1
434 1
435 (List was failing)
436 0
437 0
438 0
439 0
440 (List was failing)
441 0
442 0
443 0

LogData table "BackupListCount": 2 in 433-434,437-439,441-443.
435 0 (List was failing)
436 1
440 0 (List was failing)

RemoteOperation table dlist count in initial list
433 2
434 2
435 (List was failing)
436 2
437 3
438 4
439 5
440 (List was failing)
441 6
442 7
443 8

RemoteOperation table
433 appears normal
434 appears normal
435 no operations
436 ends at put dlist
437 ends at put dlist
438 ends at put dlist
439 ends at put dlist
440 no operations
441 ends at put dlist
442 ends at put dlist
443 ends at put dlist

435 Messages
"2025-02-05 10:08:36 +01 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started",
"2025-02-05 10:08:39 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
"2025-02-05 10:08:39 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Retrying:  ()",
"2025-02-05 10:08:49 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
"2025-02-05 10:08:49 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Retrying:  ()",
"2025-02-05 10:08:59 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
"2025-02-05 10:08:59 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Retrying:  ()",
"2025-02-05 10:09:10 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
"2025-02-05 10:09:10 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Retrying:  ()",
"2025-02-05 10:09:20 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
"2025-02-05 10:09:20 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Failed:  ()"

435 Errors
"2025-02-05 10:09:20 +01 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Backup has failed with error: Non-json response: \r\nInvalidDataException: Non-json response: "

Non-json response: \r\n is in 425,435,440, but 425 did not seem harmful to 426.
Unclear to me how this started, but once it starts, table divergence increases.

That is unexpected! So somehow there is a file that is known to Duplicati, but it is lacking a fileset.

I think the missing file hash is part of the explanation. Usually, the file is created locally and the hash is recorded in the database. Files without a hash are usually only present after a database recreate where the files have not yet been accessed. But since this is a dlist file it should always be downloaded during recreate.

If the file “magically” appears on the remote storage, there should be an error during the backup where it verifies that all known files are there and that no new files have suddenly appeared. But in this case the status is Verified, meaning that it is known.

I can see that the other timestamps are around 2025-02-xx, so this is an older file, which may also be a hint that it has reappeared?

It looks like there were backups running in 2025?

That points to the Operation table. You should be able to see what operation was running as “operation 3”.

If you can create a bug report database, I would love to look at it. If not, can you perhaps delete the tables with filenames (and run VACUUM after), and share a copy with me?
The remainder of the tables should not contain any sensitive information, but you can manually verify.

Pretty sure that would work. To be safe, rename the file instead of deleting it. If it starts with something other than duplicati- it will be ignored. Also make a copy of the database before removing the entry.
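A sketch of that sequence in Python, shown against a throwaway database (on a real system the path comes from the “Database…” page and the row ID from your own inspection; treat this as an outline of the advice, not a tested procedure):

```python
import os, shutil, sqlite3, tempfile

def remove_orphan_volume(db_path, volume_id):
    """Copy the job database first, then drop one RemoteVolume row by ID."""
    shutil.copy2(db_path, db_path + ".bak")   # keep a copy, per the advice above
    db = sqlite3.connect(db_path)
    db.execute("DELETE FROM RemoteVolume WHERE ID = ?", (volume_id,))
    db.commit()
    db.close()

# Demo on a throwaway database; 12040 is the orphaned row ID from the posts above.
path = os.path.join(tempfile.mkdtemp(), "job.sqlite")
db = sqlite3.connect(path)
db.execute("CREATE TABLE RemoteVolume (ID INTEGER, Name TEXT)")
db.execute("INSERT INTO RemoteVolume VALUES (12040, 'duplicati-orphan.dlist.zip.aes')")
db.commit(); db.close()

remove_orphan_volume(path, 12040)
# On the remote storage, rename (don't delete) the matching .dlist.zip.aes so
# Duplicati ignores it but it can still be put back if needed.
```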

I don’t think we will learn much more from the current state, but the logs in the database may help to trace down the cause.

Thanks! That makes it a lot easier to track down.

Thanks for the summary and analysis.

My summary of your summary is:

  • 433: regular backup
  • 434: regular backup, but the Fileset is missing; uploads seem to be silently failing
  • 435: failed backup, never gets started as listing prior to verification failed
  • 436: failed due to missing Fileset

It is possible that some later operation has deleted the Fileset from the database, but given the error is detected in 436, I don’t think this is the case.

From what I can see the problem seems to start with operation 434.
This operation completed without any errors and marked a remote volume as deleted.
I am assuming this operation also removed the Fileset that was associated with the remote dlist file that was deleted (duplicati-20250127T090316Z.dlist.zip).

But there is no fileset for operation 434 in the database. Not sure if this was deleted as part of the other delete or not, but my guess is that it was never created. There is a remote volume named duplicati-20250202T125444Z.dlist.zip that was created with version 434, and the stats say:

{
    "FilesUploaded": 1,
    "FilesDownloaded": 3,
    "FilesDeleted": 1
}

This is consistent with 1 file deleted, 1 uploaded, and a triplet downloaded for testing.
However, examining the RemoteOperation table and the log shows that it did indeed upload a number of files. According to the RemoteOperation table, a total of 17 file uploads were attempted.

It looks like none of these files were actually uploaded, as they do not show up in RemoteVolumes or any subsequent listing of the destination. Also, all 17 files are dblock files, meaning the related dindex files were never uploaded.

This seems to indicate that it is possible to get into a situation where uploads are started but never completed. I don’t know exactly how it can happen, but I can see that it is possible to close the channel that serves new work tasks. If this happens, the process shuts down without even looking at any in-progress tasks.

This could explain the missing uploads, and perhaps also the issue reported by @ts678.

I don’t understand how or why the channel is closed as it does not do this in any of our tests. I also don’t understand how the upload of the dlist can succeed while the dblock is either failed or in-progress.
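To make the suspected race concrete, here is a toy Python model (Duplicati’s real pipeline uses CoCoL channels in C#; this only illustrates how a premature close signal can silently drop queued uploads):

```python
import queue

CLOSE = object()  # stand-in for the work channel being closed

# Uploads queued behind the close signal are never examined, so one upload can
# succeed while the others silently never happen.
work = queue.Queue()
for item in ["dlist-20250202", CLOSE, "dblock-001", "dblock-002"]:
    work.put(item)

uploaded = []
while not work.empty():
    item = work.get()
    if item is CLOSE:
        break          # shut down without draining the remaining work
    uploaded.append(item)

print(uploaded)  # ['dlist-20250202'] -- the dblocks were dropped
```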

@mathdu In the UI, do you see any errors for the backup that was running on 2025-02-02T13:12:00.9864685Z (UTC)? I am suspecting that something in the shutdown logic is perhaps causing errors that are not logged.

Fortunately, I have removed all the code in question here for 2.1.0.108, but it does not fix the setup.

Also, @mathdu do you want to try to recover the backup to the state before 2025-02-02 ?