Error Detected 1 volume with missing filesets: VolumeId = 32, Name = duplicati-20251104T211541Z.dlist.zip.aes, State = Uploaded

I am currently playing with Duplicati to see if it fits my needs. So the scenario I have is only educational.

I have two NASes and wanted to back up the docker folder of NAS1 to NAS2 using SSH. Duplicati itself runs in a docker container on NAS1.

  • Backup works well (once I set the UID and GID to `0` in the docker compose file; a sketch follows after this list).
  • Restore to another folder on NAS1 works well.
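
For reference, the relevant part of such a compose file looks roughly like this. This is only a sketch: it assumes a linuxserver-style image that reads PUID/PGID environment variables, and the image name and paths are examples only.

  # hedged sketch: run the container as root so it can read the whole source folder
  # (assumes a linuxserver-style image that reads PUID/PGID; paths are examples)
  services:
    duplicati:
      image: linuxserver/duplicati
      environment:
        - PUID=0
        - PGID=0
      volumes:
        - /volume1/docker:/source:ro
        - /volume1/duplicati-config:/config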

However, I then also installed the TrayIcon on my Ubuntu desktop PC and tried to restore the backup there. This always fails with the following error:

Error
Detected 1 volume with missing filesets: VolumeId = 32, Name = duplicati-20251104T211541Z.dlist.zip.aes, State = Uploaded

I would like to understand what the issue is and how to solve it.

Hello all. I am also interested in understanding how this issue occurs and the steps to fix it. I am restoring on a separate system (for backup testing purposes) and ran into this issue. I ran through deleting recent backups, rebuilding the database, compacting, and testing, all without success (all from the backed-up system).

I am getting this error after upgrading to 2.2.0.1 (2.2.0.1_stable_2025-11-09) when restoring to another system on the same version. To attempt a fix, I deleted the database and all the files from the backup server and then ran a backup and a restore - it worked, but only once. After another backup, the subsequent restore failed after getting the index files, with:

The operation Restore has failed => Detected 1 volume with missing filesets: VolumeId = 26, Name = duplicati-20251111T084437Z.dlist.zip.aes, State = Uploaded

ErrorID: DatabaseInconsistency
Detected 1 volume with missing filesets: VolumeId = 26, Name = duplicati-20251111T084437Z.dlist.zip.aes, State = Uploaded

Interesting, I only tried it with an incremental backup (Version 2).

Since you say you tested it after the upgrade, did this work before with an older version?

@kenkendk Do you understand why we see this issue?

Yes. It had been working for years before the upgrade. I’m wondering if I need to revisit the restore command - perhaps something has changed there in the new version.

Last night the backup had a warning

"2025-11-13 00:23:48 +00 - [Warning-Duplicati.Library.Main.Operation.TestHandler-FaultyIndexFiles]: Found 1 faulty index files, repairing now"

but a re-run worked fine this morning, so I need to look for a pattern. A subsequent attempt to restore failed with the same error.

Extra feedback. In my scenario I deleted all but the most recent backup via the backed-up system's web UI. Then I re-checked that last backup from the separate machine, and the rebuilt database worked. The last backup was made with the new version.

Did you use any extra options on the backup command that enabled the new update to detect the faulty index?

Neither the backup config nor the restore command changed. I only upgraded to the latest docker container.

I was wondering what is special about your backup configuration that allowed it to “heal” on the next run, so I can implement it on my own backups. Thanx!

Summary of changes from 2.1 to 2.2

New faster restore flow, use --restore-legacy=true to fall back
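
As a rough example of falling back (hostname, path, and credentials are placeholders):

  # hedged sketch: fall back to the pre-2.2 restore flow while testing
  duplicati-cli restore "ssh://nas2/backups/docker?auth-username=backup" "*" \
      --passphrase="..." \
      --restore-path=/tmp/restore-test \
      --restore-legacy=true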

I think it’s automatic on random sample sets, but one can ask for a test of all files.

At the end of each backup job, Duplicati checks the integrity by downloading a few files from the backend.
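
If you want it to check everything rather than a small random sample, something along these lines should do it (placeholder URL and credentials; check your version's option list for the exact names):

  # hedged sketch: verify all remote volumes instead of a small random sample
  # (the per-backup sample count can also be raised with --backup-test-samples)
  duplicati-cli test "ssh://nas2/backups/docker?auth-username=backup" all \
      --passphrase="..." \
      --full-remote-verification=true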

I’m not sure if the index-files fix has anything to do with the original problem, though.

That sounds like a possibility (test all). Thanx for the info. In the meantime I was able to confirm that a backup that had failed the test started working after I deleted all but the latest backup. It may be localized to some backups - perhaps they didn’t finish properly - as I have other backup jobs that work just fine.

This fails for me with the same error. I have disabled my remote restore for a few days to see if the backup works OK without any ‘interference’ from another system.

Do the dlist files with complaints have about the right size compared to the others, assuming you have some old ones? If they do, do the old versions restore OK?

The time in the dlist file name “should” match the “Restore” list, except the dlist name is in UTC. The complaint seems to be saying there’s a dlist that didn’t make it into a restore version.
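
A quick way to eyeball this on the destination (example path) is to list the dlist files and compare names and sizes; the timestamp in each name is UTC, so it will be offset from the local times in the Restore dropdown:

  # hedged sketch: compare dlist sizes and UTC timestamps on NAS2
  ls -l /volume1/backups/docker/duplicati-*.dlist.zip.aes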

The problem sounds like people are using “Direct restore” to their other system. Alternatively, one could do an actual database recreate, but be careful with that. Restore “should” be OK, but don’t do things like a backup that changes the destination.
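
A recreate on the other machine would look roughly like this (placeholder URL and paths; when no database exists at --dbpath, repair rebuilds one from the remote dlist/dindex files):

  # hedged sketch: rebuild a fresh local database from the remote data
  duplicati-cli repair "ssh://nas2/backups/docker?auth-username=backup" \
      --passphrase="..." \
      --dbpath=/tmp/recreated.sqlite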

“Interference” is the concern if two systems collide, but usually it causes extra or missing files.
Looking forward to hearing the test result.

It gets stranger.

  1. Deleted database from the source host (saturn)
  2. Deleted all aes files from the destination host directory (jupiter)
  3. Backed up from saturn to jupiter using duplicati app on saturn - Works
  4. Restored files from jupiter to odysseus (manual command on odysseus) - Files restored - no errors
  5. Repeated step 3 - no errors
  6. Repeated step 4 - failed with DatabaseInconsistency

Go there to see the rest of the context.

Thanks for your suggestion, but I think it’s probably not that.

  • My restore command explicitly states the location of the backup files
  • From the docs, dbconfig.json seems to be a cache file on the source system, which isn’t involved in the restore.
  • I can restore locally on the source system with no issues.
  • The remote restore succeeds on first attempt and fails as soon as another backup takes place.

Doesn’t matter. This question (which may still be off-target) is about the DB on the restore system.

Docs or no, it’s only a cache of the DB location, on whatever system you’re on.
Here’s a chunk of the one I was using to try to get your error, but I got another…

  {
    "Type": "file",
    "Server": "",
    "Path": "C:\\Duplicati\\duplicati-2.2.0.1_stable_2025-11-09-win-x64-gui\\RUN\\test 1\\",
    "Prefix": "duplicati",
    "Username": null,
    "Port": -1,
    "Databasepath": "C:\\Users\\Me\\AppData\\Local\\Duplicati\\test 1.sqlite",
    "ParameterFile": null
  }

It doesn’t matter if the problem is the database – they’re different.

I was trying to repro a situation where a backup got things out of sync. I did so; however, the error was different. The restore compared the old DB to the destination.

I thought maybe looking at your dbconfig.json might shed some light on things.

I was just talking about DB presence/absence, but if desired the DB can be opened.
DB Browser for SQLite is available directly, or often packaged on Linux as sqlitebrowser.
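
If someone does open it, a query along these lines should show any dlist volume without a matching fileset row (the Remotevolume and Fileset table names here are my assumption, inferred from the wording of the error, and the database path is a placeholder):

  # hedged sketch: list dlist ("Files") volumes that have no Fileset entry
  sqlite3 /path/to/backup-job.sqlite "
    SELECT ID, Name, State
    FROM Remotevolume
    WHERE Type = 'Files'
      AND ID NOT IN (SELECT VolumeID FROM Fileset);"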

If nobody wants to look at things, I guess we can just wait and hope for help…

It’s unfortunate that there are two similar topics active at the same time.
Please keep an eye on the other one too. My repro attempt was described in that one.

Was it done the same way, with the CLI and no explicit database? I still worry about an old DB.
Make sure the remote is doing a recreate, like I already advised in the other topic.
It should have to read through all the dlist and dindex files, showing this in the terminal.
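
For example, something like this should force a restore that rebuilds a temporary database from the remote files instead of reusing anything local (hostnames, paths, and credentials are placeholders, and I'm assuming --no-local-db still behaves this way in 2.2):

  # hedged sketch: restore on the remote machine with a forced database rebuild
  duplicati-cli restore "ssh://jupiter/backups?auth-username=backup" "*" \
      --passphrase="..." \
      --no-local-db=true \
      --restore-path=/tmp/restore-test \
      --version=0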

EDIT 1:

Kind of like what the other topic’s original post showed. It didn’t help there, though it
would still be best to know what any individual restore by anybody is really doing.

Maybe a better recipe for repro can be put together.

EDIT 2:

I suppose I can ask if the two systems have equal access to the destination files.
This doesn’t seem likely to me to be the crucial difference, but what else is there?

The problem sounds like people are using “Direct restore” to their other system. Alternatively, one could do an actual database recreate, but be careful with that. Restore “should” be OK, but don’t do things like a backup that changes the destination.

Well, being able to restore to a different system is the whole point of running this test. E.g., what if the hardware failed? The restore would then also be onto new, different hardware. It is obvious to me that a backup should only be triggered from one location.

I suppose I can ask if the two systems have equal access to the destination files.

I exported the config file (NAS backup) and did the restore on a different system (Ubuntu) using that config file, so I expect the access to the backup to be identical.

The remote restore succeeds on first attempt and fails as soon as another backup takes place.

I just did another test using version 2.2.0.0:

  1. I configured a new backup on my NAS1 to NAS2 (backup schema: Smart) and exported the config
  2. Created the first backup
  3. Then I restored it (version 0) on my Ubuntu system → No errors
  4. Then I did a 2nd backup (version 1)
  5. Then I tried to restore it (version 1) on my Ubuntu system → error (missing fileset). Not a single file got restored at all!
  6. Then I tried to restore version 0 once more on my Ubuntu system → restored without issues!

So restoring onto a different system indeed seems to work only for the first version (I did not check whether the same issue applies to the original system, NAS1 in my case).
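
To cross-check this against the remote data itself, the filesets can be listed while ignoring any local database and compared with the dlist files on NAS2. A rough sketch, with placeholder URL and credentials, assuming --no-local-db also applies to the list command:

  # hedged sketch: list the backup versions as derived from the remote dlist files
  duplicati-cli list "ssh://nas2/backups/docker?auth-username=backup" \
      --passphrase="..." \
      --no-local-db=true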