Backup started failing "non-empty blocksets with no associated blocks"

Backup fails to start with the error:
System.AggregateException: Detected non-empty blocksets with no associated blocks!

Not long ago I rebuilt the database because of some errors. I am using

Should I rebuild again?

This error is coming out of the database VerifyConsistency step, but I’m not exactly sure how we’d get into this situation in the first place.

I’d suggest a Repair first (just because it’s likely faster) to see if that resolves it, and if not, then try another rebuild.
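For reference, the check that throws this error looks for “blocksets” (recorded file contents) whose Length is non-zero but which have no block rows attached. Here is a minimal sketch of that kind of query against a mocked-up, simplified version of the schema — the table and column names are assumptions inferred from the error message, not Duplicati’s exact DDL:

```python
import sqlite3

# Mock of the two tables involved (assumed, simplified schema):
# Blockset holds one row per file content; BlocksetEntry maps blocksets to blocks.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Blockset (ID INTEGER PRIMARY KEY, Length INTEGER);
CREATE TABLE BlocksetEntry (BlocksetID INTEGER, BlockID INTEGER);
INSERT INTO Blockset VALUES (1, 4096), (2, 0), (3, 1024);
INSERT INTO BlocksetEntry VALUES (1, 10);  -- blockset 3 gets no block rows
""")

# The consistency check: non-empty blocksets (Length > 0) with no associated blocks.
rows = con.execute("""
    SELECT ID FROM Blockset
    WHERE Length > 0
      AND ID NOT IN (SELECT BlocksetID FROM BlocksetEntry)
""").fetchall()

print(rows)  # blockset 3 is non-empty but has no BlocksetEntry rows
```

If this query returns any rows against a real job database, the backup aborts with the message above; an empty result means the check passes.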

I just had the same error.

  • My backup was working great. It included a lot of directories, including “C:\user” and all subdirectories.
  • I deleted some source files from the backup set (via the ‘edit’ option). Specifically, I deleted a cache directory from deep within the windows c:\USER…\appdata\mozilla… tree (as it was just continually backing up new cache files pointlessly)
  • On the very next (and every subsequent) backup, I get the above error.
  • I tried editing the job again, and the strange part is that the changes I made to the directory selections didn’t seem to ‘stick’. Deleting the whole C:\user directory didn’t help, and adding it back didn’t help either.

It may be pure coincidence, but my gut tells me that deleting directories from the source list is related to the error.

Same issue here. I had to reset several terabytes of backups about two months ago due to this error. Now it has appeared again. Yet, after updating to the latest Duplicati, there was a positive development: running Repair actually fixed the issue, so I didn’t need to reset the backup set(s).

It’s a great advancement that Repair works. Even better would be if it didn’t need a manual run of Repair. Automatic error detection and recovery is what I’m always looking for, if possible.

  • Thank you

What does ‘stick’ mean? Changes only affect future backups, unlike some other backup programs where deselection purges data. The PURGE command is the Duplicati do-it-yourself equivalent (run it, for example, with Commandline).

Was this trying to help with the ‘stick’ issue or with the error? Either way, it may be because it doesn’t change existing things. The DELETE command is a way to delete entire versions of the file view. In the job settings, retention sets up deletes.

Any more test clues, anyone? I tested unchecking a source directory that had been backed up. All still works. There’s also a theory (pointing to this article actually) that privilege changes are involved in producing this…

@ts678 I am guessing that @T_C means deleting from the source list, not deleting from disk.

Agreed, and that’s what I translated into “deselection”, meaning take what was checked and click to uncheck. After that, things got less clear to me. I don’t think I wandered into on-disk deletions from source (don’t do it); however, deletions from destination backup files (and their representation in the database) are key to the bug, because it’s complaining about seeing files with non-zero lengths where it can’t find any blocks for those files… Version deletion can eventually take care of that (or sooner, if requested) by erasing the memory of those files.

I am also experiencing this problem. I have 2 backup tasks, one of which is working perfectly fine. The other one is about 4x the size on disk and could never successfully finish because of this error. I have tried to run it twice, deleting all files on the remote between attempts, and both times it has failed with this error, as it does now when I try to start it manually. I have tried database repair, which performed some deletions and completed successfully, but the error still prevents me from backing up this task.

If you still have the db (.sqlite file), could you perform the steps listed in this post? They’re pretty detailed, including downloading an SQL browser program, but it would really help characterize the problem.

I have tried to follow your steps.

Running the SQL statement also yields one result for me.

When searching the File table with that ID as the BlocksetID, I also get no result.


Good to have those results.

Could you check the BlocksetIDs in the File table within, say, 10 of the value of the missing BlocksetID? For the 4 people who’ve done this, all the nearby BlocksetIDs have been temp-file-type files: files that change often or might be locked.
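For anyone who wants to run that check themselves, here is a sketch of the “nearby BlocksetIDs” lookup. The File table layout (ID, Path, BlocksetID columns) and the example paths are assumptions for illustration; the real schema may differ:

```python
import sqlite3

# Mocked-up, simplified File table (assumed columns: ID, Path, BlocksetID).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE File (ID INTEGER PRIMARY KEY, Path TEXT, BlocksetID INTEGER);
INSERT INTO File VALUES
  (1, 'C:\\media\\ep1.mkv', 95),
  (2, 'C:\\media\\ep2.mkv', 98),
  (3, 'C:\\media\\ep5.mkv', 104);
""")

missing = 100  # the BlocksetID reported by the earlier SQL statement

# List File entries whose BlocksetID is within 10 of the missing value,
# to see what kinds of files were recorded around the same time.
nearby = con.execute("""
    SELECT Path, BlocksetID FROM File
    WHERE BlocksetID BETWEEN ? - 10 AND ? + 10
    ORDER BY BlocksetID
""", (missing, missing)).fetchall()

for path, bsid in nearby:
    print(bsid, path)
```

The same SELECT can be pasted into an SQL browser against the real job database, substituting the actual missing BlocksetID for the placeholder.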

Alright, I checked and for me that is not the case. The surrounding files are normal files (video & image mainly), but I can see the file that is probably missing. All the other files in the folder structure that are nearby are also represented by entries in the file table with BlocksetIDs that are close by. However, the missing file is a simple AVI file.
My Backup is of some personal media files, so there shouldn’t even be any temp files in any of the folders I’m backing up.


Let me clarify: the file that’s missing is actually still existing on disk?

I did not expect that.

And have these files been changing? Created, deleted, moved, renamed, or even just locked recently (e.g. during a Duplicati run)? For example, the same file re-written, or locked for editing?

Any changes to filters that might exclude this particular missing file? I’m grasping for anything that might give Duplicati an inconsistent view of the file system from one time to the next. If this file was just static, then there’s just a straight-out bug in Duplicati (which there could be).

I now deleted the backup and ran it again from scratch, because I want it to actually back up my data.
And it has happened again: the same error, but this time seemingly with a different file, though still in the same directory.

I have a small series with 5 episodes in that folder; 4 of the episodes are present in the File table in the database, but episode 4 is missing from the database while clearly present on disk. The file in question is a video file that plays without problems, this time an MKV, though I doubt the file type makes a difference here.

The files haven’t been changing at all. The only program that should have accessed this directory is Duplicati itself. I did have to forcefully shut down Duplicati, but only while editing a config and not while it was backing up.

I am using the duplicati docker from linuxserver (GitHub - linuxserver/docker-duplicati) on unRAID. Myself, I haven’t added any filters, but I don’t know if any are present by default.

What is curious is that it failed in the exact same folder again, but with a different file missing. This probably implies that the file is not the problem, but something else is causing the issue.
The source of the backup is roughly 410GB in size, but I don’t think that is relevant, as I have successfully backed up more data in a backup task on another machine.


I have been having this problem, consistently, trying to complete my initial backup. I’ve trimmed my 18TiB/3m files down to 200GiB/90k files with filters, and still get this error when it gets close to 100%. I’m just about at my wit’s end with this software.

I also get “Detected non-empty blocksets with no associated blocks!” error.
Using Windows 7, Duplicati -
Remote is Debian 8 SFTP.

Got the same error on first backup of new set.

What happened was that at the end of the initial backup it said -50680 bytes at 3.5MB/s and stayed that way for an hour. I had to restart the service to get anywhere, and then this error started occurring when I resumed the backup.

No repair or rebuild helped. Now I will just export the config, delete that backup set everywhere, and redo it fresh with the imported configuration, but maybe the problem with the negative size will shed some light on the issue.

The directory being backed up could get some new or removed files between Duplicati counting them all and actually starting the backup. Also, the very first backup attempt failed miserably, as the duplicati user indeed doesn’t have access to private files. I had to restart the whole service to run under root and enable the privilege-restore option on all sets.

Debug error:
~~~
System.Exception: Detected non-empty blocksets with no associated blocks!
   at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency(Int64 blocksize, Int64 hashsize, Boolean verifyfilelists, IDbTransaction transaction)
   at Duplicati.Library.Main.Operation.Backup.BackupDatabase.<>c__DisplayClass33_0.b__0()
   at Duplicati.Library.Main.Operation.Common.SingleRunner.<>c__DisplayClass3_0.b__0()
   at Duplicati.Library.Main.Operation.Common.SingleRunner.d__21.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass13_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.Backup(String inputsources, IFilter filter)
   at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)
~~~

I’m also having this issue. Is there anything that we can do to repair this, short of exporting the config, deleting the files and the config, and remaking the backup?

There is a report above (and others elsewhere) that Repair worked, but some other reports that it did not. Both Repair and Recreate are being redesigned and rewritten, but I don’t know timetables or functionality.

Fatal error: Detected non-empty blocksets with no associated blocks gave some more technical ideas… Basically, if you can find a pathname that’s causing trouble, maybe you can just purge the problem away, depending on how valuable the backup of the particular file is, weighed against alternatives like Recreate.

If nothing else, would you be willing to post a database bug report as mentioned there? Nobody ever has.

Also, can you look at the job log just before the failure for “DeleteResults” and “CompactResults” entries that look like they did anything on that run? Also look for any other unusual warnings or errors, or just post the whole thing if there’s nothing sensitive in the messages. Put three tildes above and below to format it.