Backup started failing "non-empty blocksets with no associated blocks"


Backup fails to start with the error:
System.AggregateException: Detected non-empty blocksets with no associated blocks!

Not long ago I rebuilt the database because of some errors. I am using

Should I rebuild again?

Fatal error: Detected non-empty blocksets with no associated blocks

This error comes out of the database VerifyConsistency step, but I’m not exactly sure how we’d get into this situation in the first place.
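As a sketch of what that check is doing (assuming the local database’s Blockset/BlocksetEntry layout; table and column names can vary between versions), it is roughly this query, demonstrated here against a tiny in-memory stand-in:

```python
import sqlite3

# Tiny in-memory stand-in for Duplicati's local job database
# (assumed layout: Blockset stores a file's total Length, and
# BlocksetEntry maps each blockset to its blocks).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Blockset (ID INTEGER PRIMARY KEY, Length INTEGER, FullHash TEXT);
CREATE TABLE BlocksetEntry (BlocksetID INTEGER, "Index" INTEGER, BlockID INTEGER);
INSERT INTO Blockset VALUES (1, 1024, 'aaa'), (2, 0, 'bbb'), (3, 2048, 'ccc');
INSERT INTO BlocksetEntry VALUES (1, 0, 10);   -- nothing for blockset 3
""")

# Rough equivalent of the consistency test: every blockset with a
# non-zero Length must have at least one associated block entry.
broken = db.execute("""
    SELECT ID FROM Blockset
    WHERE Length > 0
      AND ID NOT IN (SELECT BlocksetID FROM BlocksetEntry)
""").fetchall()
print(broken)  # blockset 3 is "non-empty" yet has no blocks
```

Any row that comes back is a file the database claims has content, but for which no blocks can be found, which is exactly what the error message says.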

I’d suggest a Repair first (just because it’s likely faster) to see if that resolves it, and if not, then try another rebuild.


I just had the same error.

  • My backup was working great. It included a lot of directories, including “C:\user” and all subdirectories.
  • I deleted some source files from the backup set (via the ‘edit’ option). Specifically, I deleted a cache directory from deep within the windows c:\USER…\appdata\mozilla… tree (as it was just continually backing up new cache files pointlessly)
  • On the very next (and every subsequent) backup, I get the above error.
  • I tried editing the job again, and the strange part is that the changes I made to the directory selections didn’t seem to ‘stick’. Deleting the whole C:\user directory didn’t help, and adding it back didn’t help either.

It may be pure coincidence, but my gut tells me that deleting directories from the source list is related to the error.


Same issue here. I had to reset several terabytes of backups about two months ago due to this error, and now it has appeared again. However, after updating to the latest Duplicati there was a positive development: running Repair actually fixed the issue, so I didn’t need to reset the backup set(s).

It’s a great advancement that Repair works. Even better would be not needing to run Repair manually; automatic error detection and recovery is what I’m always hoping for, where possible.

  • Thank you


What does ‘stick’ mean? Changes only affect future backups; unlike some other backup programs, deselection does not purge already-backed-up data. The PURGE command is the Duplicati do-it-yourself equivalent (run it, for example, with Commandline).

Was this trying to fix the ‘stick’ or the error? Either way, the likely explanation is that editing the job doesn’t change existing backups. The DELETE command is a way to delete entire versions of the file view, and in the job settings the retention policy sets up automatic deletes.

Any more test clues, anyone? I tested unchecking a source directory that had been backed up, and everything still works. There’s also a theory (pointing to this article, actually) that privilege changes are involved in producing this…


@ts678 I am guessing that @T_C means deleting from the source list, not deleting from disk.


Agreed, and that’s what I translated into “deselection”, meaning take what was checked and click to uncheck. After that, things got less clear to me. I don’t think I wandered into on-disk deletions from source (don’t do it); however, deletions from the destination backup files (and their representation in the database) are key to the bug, because it’s complaining about seeing files with non-zero lengths for which it can’t find any blocks. Version deletion can eventually take care of that (or sooner, if requested) by erasing the memories of those files.


I am also experiencing this problem. I have 2 backup tasks; one works perfectly fine, while the other, about 4x the size on disk, could never successfully finish because of this error. I have run it twice, deleting all files on the remote between attempts, and both times it failed with this error, as it does now when I try to start it manually. I have tried database repair, which performed some deletions and completed successfully, but the error still prevents this task from backing up.


If you still have the db (.sqlite file), could you perform the steps listed in this post? They’re pretty detailed, including downloading an SQL browser program, but they would really help in characterizing the problem.
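For anyone who prefers a script to a browser, the File-table step can be reproduced like this (a sketch; the schema is assumed from older local databases, where File has Path and BlocksetID columns, and the data here is invented — point sqlite3 at a copy of the real .sqlite file to run it for real):

```python
import sqlite3

# In-memory stand-in for (a copy of) the job's .sqlite database;
# the paths and IDs are invented for the demo.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE File (ID INTEGER PRIMARY KEY, Path TEXT, BlocksetID INTEGER)")
db.execute("INSERT INTO File VALUES (1, '/media/a.avi', 41), (2, '/media/b.avi', 43)")

orphaned_blockset = 42  # the ID returned by the orphan query
rows = db.execute("SELECT Path FROM File WHERE BlocksetID = ?",
                  (orphaned_blockset,)).fetchall()
print(rows)  # [] -- no file row references the orphaned blockset
```

An empty result, as reported below, means no file entry even points at the broken blockset.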


I have tried to follow your steps.

Running the SQL statement also yields one result for me.

When searching the File table with that ID as BlocksetID, I also get no result.



Good to have those results.

Could you check the BlocksetIDs in the File table within, say, 10 of the value of the missing BlocksetID? For the 4 people who’ve done this so far, all the nearby BlocksetIDs have belonged to temp-type files: files that change often or might be locked.
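If a script is easier than a browser, a window query like this would pull those neighbours (a sketch; the IDs and paths are invented, and the File-table layout is assumed from older local-database schemas):

```python
import sqlite3

# In-memory stand-in for the job database; IDs and paths are invented.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE File (ID INTEGER PRIMARY KEY, Path TEXT, BlocksetID INTEGER)")
db.executemany("INSERT INTO File VALUES (?, ?, ?)", [
    (1, '/media/show/ep1.mkv', 95),
    (2, '/media/show/ep2.mkv', 97),
    (3, '/media/show/ep3.mkv', 99),
    (4, '/media/show/ep5.mkv', 103),  # the missing file's blockset would sit in this gap
])

missing_id = 100  # the orphaned BlocksetID found earlier
nearby = db.execute(
    "SELECT Path, BlocksetID FROM File"
    " WHERE BlocksetID BETWEEN ? AND ? ORDER BY BlocksetID",
    (missing_id - 10, missing_id + 10)).fetchall()
for path, bsid in nearby:
    print(bsid, path)
```

The interesting part is what kinds of files the surrounding BlocksetIDs belong to.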


Alright, I checked, and for me that is not the case. The surrounding files are normal files (mainly video and image), and I can see which file is probably the missing one: all the other files in the nearby folder structure are represented by File-table entries with close-by BlocksetIDs, but this one is absent. The missing file is a simple AVI file.
My backup is of personal media files, so there shouldn’t be any temp files in any of the folders I’m backing up.



Let me clarify: the file that’s missing from the database still actually exists on disk?

I did not expect that.

And have these files been changing: created, deleted, moved, renamed, or even just locked recently (e.g. during a Duplicati run)? For example, the same file re-written, or locked for editing?

Any changes to filters that might exclude this particular missing file? I’m grasping for anything that might give Duplicati an inconsistent view of the file system from one run to the next. If this file was just static, then there’s a straight-out bug in Duplicati (which there could be).


I have now deleted the backup and run it again from scratch, because I want it to actually back up my data.
And it has happened again: the same error, but this time seemingly with a different file, though still in the same directory.

I have a small series with 5 episodes in that folder; 4 of the episodes are present in the File table in the database, but episode 4 is missing from the database despite clearly being present on disk. The file in question is a video file that plays without problems, this time an MKV, though I doubt the file type makes a difference here.

The files haven’t been changing at all; the only program that should have accessed this directory is Duplicati itself. I did have to forcefully shut down Duplicati once, but only while editing a config, not while it was backing up.

I am using the Duplicati Docker image from linuxserver (GitHub - linuxserver/docker-duplicati) on unRAID. I haven’t added any filters myself, but I don’t know if any are present by default.

What is curious is that it failed in the exact same folder again, but with a different file missing. This probably implies that the file itself is not the problem and that something else is causing the issue.
The source of the backup is roughly 410GB in size, but I don’t think that is relevant, as I have successfully backed up more data in a backup task on another machine.