Fatal error: Detected non-empty blocksets with no associated blocks

@thomasp Did you have your logfile enabled during the 2nd, incremental backup that errored? Is there anything useful in it?

That OneDrive folder it complains about is hard to explain. Are you still running with --snapshot-policy=on? Still running as a service? On both the initial and subsequent incremental runs? If so, access to OneDrive should work. It looks like it works the first time but not on subsequent runs.

Notice the full path implies VSS is on (has VolumeShadowCopy in its name):


Why can’t it access this file the 2nd time?

I also note that when you ran the database query on your previous db, you did not get files in OneDrive but rather temp files, which is also confusing.

I also noted this in your log: System.IO.PathTooLongException. I don’t see a hint as to which path was too long. Is there anything nearby in the logfile?

EDIT: this appears to be a known bug with --usn-policy. See this post

@kaltiz privileges can still be a hidden issue if the files you’re backing up are open and/or changing during the backup. Duplicati provides the advanced option --snapshot-policy=on to access open files by freezing them, but you need elevated privileges on both Windows and Linux to use it.

@kaltiz and @thomasp (again): can you both perform the steps listed in reply 3 (elaborated on in reply 6) and post your results?

Today I fixed yet another backup with terabytes of data that had hit this error, using repair. Yet the repair is as uninformative as it can be: it says “listing remote folder” and that’s all, with no indication whatsoever of whether anything was found and fixed. But after this, backup works again. I think this is the fourth time this same set (on local disks) has failed for the same reason. A systemic failure. (beta 2018-11-28)

I’m beginning to lose track.
I didn’t have the time to watch Duplicati very closely for the last few days. It complained quite often about the PathTooLongException but didn’t tell me which path was too long. The restore GUI option always offered to restore the initial backup and the latest one, but none in between. The backup job itself said there had never been a successful backup.

Today, not really knowing what I was doing, I clicked on “create error report” (or whatever it is labeled in English…) and Duplicati worked for about half an hour. Afterwards, I tried another incremental backup and: it worked! And it says it was successful…

Let’s see how long this is going to work.

I have the log-to-file option enabled, but the logfile is growing and growing. Unfortunately, my computer is a notebook and I connect it to different WLANs all day, so there are quite a lot of warning messages that the Backblaze cloud (my backup target) is unreachable.
If you’re willing to analyze the logfile, I could send it to you (I don’t think it contains private data). I don’t see any new relevant information in it.

Ran into the same issue.

  • Added a new local path to an existing backup through the browser UI.
  • Started backup from the command line.
  • Upload got stuck with no network activity (a cellular connection), so I Ctrl-C’ed it.
  • A subsequent command line backup reported “remote files that are not recorded in local storage”.
  • Ran a database repair through the browser UI.
  • A subsequent command line backup reported “Abort due to constraint violation” and “non-empty blocksets”.
  • Another database repair through the browser UI was successful, but did not fix the problem.
  • A database rebuild through the browser UI was successful.
  • A subsequent command line backup was successful.

Ubuntu 18.10 (cosmic)
4.18.0-13-generic x86_64
Backblaze B2

A log excerpt is below.

Backup started at 1/22/2019 11:44:52 PM
Checking remote backup …
Listing remote folder …
Scanning local files …
2406 files need to be examined (255.30 MB) (still counting)

Uploading file (49.93 MB) …
6279 files need to be examined (82.16 MB) (still counting)

37346 files need to be examined (1,007.73 MB) (still counting)
39677 files need to be examined (1.04 GB)
Uploading file (52.31 KB) …

Backup started at 1/23/2019 12:16:36 AM
Checking remote backup …
Listing remote folder …
Extra unknown file: duplicati-i3387dfa62f9d47718c172f1a18f70e32.dindex.zip.aes
Found 1 remote files that are not recorded in local storage, please run repair
Fatal error => Found 1 remote files that are not recorded in local storage, please run repair

ErrorID: ExtraRemoteFiles
Found 1 remote files that are not recorded in local storage, please run repair

Backup started at 1/23/2019 12:17:20 AM
Checking remote backup …
Listing remote folder …
Scanning local files …
3085 files need to be examined (257.25 MB) (still counting)
Uploading file (17.98 KB) …
Failed to process path: /mnt/path/file.7z => Abort due to constraint violation
UNIQUE constraint failed: BlocklistHash.BlocksetID, BlocklistHash.Index
5569 files need to be examined (66.55 MB) (still counting)

38604 files need to be examined (1,007.89 MB) (still counting)
39677 files need to be examined (1.04 GB)
Uploading file (49.96 MB) …

Uploading file (49.92 MB) …
Fatal error => Detected non-empty blocksets with no associated blocks!
0 files need to be examined (0 bytes)

System.Exception: Detected non-empty blocksets with no associated blocks!
at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency (System.Int64 blocksize, System.Int64 hashsize, System.Boolean verifyfilelists, System.Data.IDbTransaction transaction) [0x0017e] in :0
at Duplicati.Library.Main.Operation.Backup.BackupDatabase+<>c__DisplayClass32_0.b__0 () [0x00000] in :0
at Duplicati.Library.Main.Operation.Common.SingleRunner+<>c__DisplayClass3_0.b__0 () [0x00000] in :0
at Duplicati.Library.Main.Operation.Common.SingleRunner+d__21[T].MoveNext () [0x000b0] in <c6c6871f516b48f59d88f9d731c3ea4d>:0
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x0000c] in <8f2c484307284b51944a1a13a14c0266>:0
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Threading.Tasks.Task task) [0x0004e] in <8f2c484307284b51944a1a13a14c0266>:0
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) [0x0002e] in <8f2c484307284b51944a1a13a14c0266>:0
at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) [0x0000b] in <8f2c484307284b51944a1a13a14c0266>:0
at System.Runtime.CompilerServices.TaskAwaiter.GetResult () [0x00000] in <8f2c484307284b51944a1a13a14c0266>:0
at Duplicati.Library.Main.Operation.BackupHandler+<RunAsync>d__19.MoveNext () [0x00b1f] in <c6c6871f516b48f59d88f9d731c3ea4d>:0
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x0000c] in <8f2c484307284b51944a1a13a14c0266>:0
at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task) [0x00050] in <6973ce2780de4b28aaa2c5ffc59993b1>:0
at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x00008] in <c6c6871f516b48f59d88f9d731c3ea4d>:0
at Duplicati.Library.Main.Controller+<>c__DisplayClass13_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x00035] in <c6c6871f516b48f59d88f9d731c3ea4d>:0
at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0011d] in :0


I just looked at my backup database, which was having this issue, with sqlitebrowser. This appears to be the same issue others are reporting: there were 2 entries in Blockset that were not in the File table. One of the nearby entries was for the Temp directory.

I will note that on my previous computer, which didn’t have this issue, I was running a version of Duplicati from earlier in 2018 which had a default “Windows” exclude option, which was checked. When I installed the latest version of Duplicati on this computer which is giving me issues, there was no such option for a default “Windows” exclude option. I’m wondering if the default exclude list in the older version of Duplicati was excluding the Temp directory? Why was that option removed from Duplicati?

I created an exclude filter to exclude the Temp directory and some other paths. I’m having Duplicati recreate the database right now and I’ll see if this fixes the problem or if it comes back.

What --log-file-log-level did you use? I suggest Verbose.

If verbose is too much, try Information

What would be helpful (but it’s a lot of steps) is

  1. Start with a working backup job.
  2. Turn on --log-file-log-level (as you have).
  3. Experience the non-empty blocksets error (well, I hope you don’t, I hope no one does, but if you do).
  4. Perform the SQL query in a database browser.
  5. Find 5 or 10 files “around” where the failing file was.
  6. Go back to the logfile and search for those files. The relevant entries are likely from previous runs, not the current one that received the error.
  7. Hopefully something in the logfile will be revealing.
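For the SQL-query step, here is a minimal sketch of how the check could be scripted with Python’s sqlite3 module instead of a GUI database browser. The Blockset and BlocksetEntry table names are the ones quoted elsewhere in this thread; always run this against a copy of the local database, never the live one.

```python
import sqlite3

def find_orphaned_blocksets(db_path):
    """Return (ID, Length) rows for blocksets that claim a non-zero
    length but have no rows in BlocksetEntry -- the condition behind
    the "non-empty blocksets with no associated blocks" error."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT ID, Length FROM Blockset "
            "WHERE Length > 0 "
            "AND ID NOT IN (SELECT BlocksetId FROM BlocksetEntry)"
        ).fetchall()
    finally:
        con.close()
```

An empty result means the consistency check should pass; any rows returned are the blocksets the error message is complaining about.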

@thomasp your PathTooLongException appears to be a known bug with --usn-policy. See this post.

I updated my reply #16 with an edit to this effect.

As a followup, I ended up adding filters to prevent Duplicati from trying to back up the temp directory, the directory where Duplicati’s own database was kept, and a few other directories containing Windows files that were constantly changing and didn’t need to be backed up. Then I recreated the database. It has been running for somewhere around a week and a half without a problem. Seeing as this error previously kept appearing within a day or two of recreating the database, and that happened several times, I feel pretty confident that these exclude filters are what fixed it for me, and that should point the developers toward fixing it in the code.

Interestingly, as soon as I turned on the most verbose log level for a job where this had happened, this error has stopped happening to me. So, either I got lucky so far, or an update has fixed it?


Are you using OneDrive backend?

I have two backups to two different OneDrive accounts that have been running for more than a year. 15 days ago, one of them failed with the “Detected non-empty blocksets” error. Repairing the database doesn’t change anything, and recreating the database (it took one week) turns the error into “You have attempted to change the block-size on an existing backup”.
I’ve deleted the backup (bye bye 6 months of backup history) and recreated the task, and now I’m getting “Object reference not set to an instance of an object” in different files every time I try to run the task.

The other backup task works fine all the time, so I suspect the OneDrive account. I have had other bad experiences with OneDrive in the past: Microsoft changes something, or something on some of their servers gets unstable and stops working (and long after, they fix it and it works again).

I’m using local disk backup, and just today had this problem once again. It just randomly hits you. After running manual repair, backups work again.

It seems to me that Duplicati isn’t very robust against failures of the program itself. While I was backing up for the first time (testing this out), my computer’s battery died and the whole machine shut off. After that, I also had to force quit Duplicati because it wouldn’t pause. Backup software like this should be able to recover from situations like that, but I’m not sure Duplicati is robust enough on those fronts yet. After those things happened, Duplicati gave me errors about a lot of things, but kept going and seemed to maybe be working? That is, until I went to restore and it gave me this “non-empty blocksets” error. After attempting to restore and getting that error, I no longer see any directories when I go to restore. This isn’t very encouraging : /

Ugh, and if you rename the directory where the backup is stored, it won’t remove the backup from your home screen! It really seems like a lot more errors need to be handled gracefully. Hmm, after restarting the Duplicati service, it seems to have gotten the picture and no longer displays the backup.


Without much technical knowledge, I may add the following experience, which occurred by chance:
When Duplicati showed this error, “Detected non-empty blocksets with no associated blocks”, and repair and the other suggestions I found here didn’t make it go away, I finally chose to recreate the backup. To do that, I created a new backup from the config file of the old error-prone backup and renamed it, but forgot to change the destination folder!
I ran it, and the above-mentioned error did not reappear. I hope this new backup does what it is expected to do.
I am running Duplicati 2 as a server on Win10, backing up to an external HD. Hope this helps.

My question: Can I trust this backup, since it still gives me a few strange errors?

2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.BackupHandler-SnapshotFailed]: Failed to create the snapshot: System.UnauthorizedAccessException: Attempted to perform an unauthorized operation.
at Alphaleonis.Win32.Vss.VssBackupComponents..ctor()
at Alphaleonis.Win32.Vss.VssImplementation.CreateVssBackupComponents()
at Duplicati.Library.Snapshots.WindowsSnapshot..ctor(IEnumerable`1 sources, IDictionary`2 options)
at Duplicati.Library.Snapshots.SnapshotUtility.CreateWindowsSnapshot(IEnumerable`1 folders, Dictionary`2 options)
at Duplicati.Library.Main.Operation.BackupHandler.GetSnapshot(String sources, Options options),
2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: C:\Documents and Settings\Hannah,
2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: C:\Documents and Settings\Hannah,
2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: C:\Documents and Settings\DefaultAppPool,
2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: C:\Documents and Settings\DefaultAppPool, … ]
[ 2019-04-30 06:48:30 +02 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-CheckingErrorsForIssue1400]: Checking errors, related to #1400. Unexpected result count: 0, expected 1, hash: 6y9axTuPuScijYqpJYoSIrWjr4oCT8anZyaWz45/PCw=, size: 102400, blocksetid: 2323039, ix: 15, fullhash: +9taBJbTJQPh6PnJfFf8LwBNUwtcmipp+dAh1VRZXLM=, fullsize: 1770112,
2019-04-30 06:48:30 +02 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-FoundIssue1400Error]: Found block with ID 7194036 and hash 6y9axTuPuScijYqpJYoSIrWjr4oCT8anZyaWz45/PCw= and size 28136,
2019-04-30 07:33:32 +02 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-CheckingErrorsForIssue1400]: Checking errors, related to #1400. Unexpected result count: 0, expected 1, hash: RchjnSGyxBpoBYVUF+eTOJs01I7NwR+px7xJ6/OHNsc=, size: 102400, blocksetid: 2323154, ix: 4, fullhash: et74+L9UlZs0anjdoQfPQdLus4M60e7CcV1TdNlF44Y=, fullsize: 2504664,
2019-04-30 07:33:32 +02 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-FoundIssue1400Error]: Found block with ID 7199469 and hash RchjnSGyxBpoBYVUF+eTOJs01I7NwR+px7xJ6/OHNsc= and size 21832 ]

My question would be: Do these errors compromise the backup?

I just received this error last night. The backup client is running Linux. The backup server is a nextcloud instance using webdav on a Linux system. I do not have snapshots turned on for this backup system.

The result of the first query is

sqlite> SELECT * FROM Blockset WHERE Length > 0 AND ID NOT IN (SELECT BlocksetId FROM BlocksetEntry);

Checking for that blockset ID in the File table returns no rows:

sqlite> SELECT * FROM File WHERE BlocksetID = 2359268;

I’ve copied the database off to the side so that I can run any other queries on it that will help to debug this issue.

I’ve started a database repair and will see if this works. Otherwise it’ll be a few days to create a new backup.
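The two queries quoted in this post can be combined into one pass: for each orphaned blockset, count how many File rows still reference it. A hedged sketch, assuming the Blockset, BlocksetEntry, and File table names quoted in this thread, and run only against a copy of the database:

```python
import sqlite3

def orphan_report(db_path):
    """For each non-empty blockset with no associated blocks, count how
    many rows in the File table still reference it (a count of 0 mirrors
    the "no result" observation above)."""
    con = sqlite3.connect(db_path)
    orphans = con.execute(
        "SELECT ID FROM Blockset WHERE Length > 0 "
        "AND ID NOT IN (SELECT BlocksetId FROM BlocksetEntry)"
    ).fetchall()
    report = []
    for (blockset_id,) in orphans:
        (nfiles,) = con.execute(
            "SELECT COUNT(*) FROM File WHERE BlocksetID = ?",
            (blockset_id,),
        ).fetchone()
        report.append((blockset_id, nfiles))
    con.close()
    return report
```

A `(blockset_id, 0)` entry matches what this poster saw: a non-empty blockset that no file references at all.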

I had this error a few months ago. I remember retrying the same backup over and over with fresh installs of Duplicati. It kept failing. Trying to repair? It has to download and unzip everything. I fell back to the previous beta. I can’t believe the devs haven’t fixed this yet…

The key to enabling a fix is a good test case that reproduces the error, and (from skimming here) I think this is still a random thing. If anybody on the thread has found a reliable way to trigger it, or a clue as to what causes it, please point it out. I’m just skimming this rather long thread, but I’m not seeing much that would help the developers.

Other things that may help are a database bug report as mentioned earlier, or logs at as detailed a level as one can stand, also mentioned earlier. Since it’s been many posts back, that means Creating a bug report and/or --log-file with --log-file-log-level=Profiling plus --profile-all-database-queries (it will create a huge log).

Maybe easier is to let development do the heavy debugging, provided someone who hits it on a somewhat regular basis can simplify it into steps, so development can create it for debug. Filing an issue on it is best.


I have been having the same issue for some days. I read some posts and tried a repair and even a recreate of the database. Nothing helped. Luckily the other backup job, which is the most important one, runs smoothly.

Therefore I am offering my help to gather debug logs or any other information needed to work out why this is happening. This thread, and others, are quite long and offer lots of ideas and tasks to do, so I am asking for someone to guide me on what information is really needed to get this fixed.

The backup I am talking about holds no personal or valuable data, and it is only 16 GB in size, which makes recreating it no big deal.

If someone can help me gather the right information and place it where it is needed, I am happy to do the work at the end.


I think someone should place a guard in StreamBlockSplitter.cs for the hash collector: it can end up empty, and then there can’t be any BlocksetEntry rows. A check before the final insert in LocalBackupDatabase.AddBlockset would also be nice; I bet in those cases there is no entry in that specific foreach loop. The easiest explanation at first glance would be a closed stream in the while loop… but that can’t explain a generated file hash for the blockset.

Link to my comment to this issue on github: Locked files seems to cause corrupt backup · Issue #3594 · duplicati/duplicati · GitHub


I had the same problem in a backup set.
I found a query in the log: SELECT * FROM “Blockset” WHERE “Length” > 0 AND “ID” NOT IN (SELECT “BlocksetId” FROM “BlocksetEntry”);
Only one row was returned for me in the SQL browser. I found a matching record (by BlocksetID) in the BlocklistHash table, so after backing up the db I deleted that one record from each of the BlocklistHash and Blockset tables.
And now my job is running again.
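The manual cleanup described above could be sketched as follows. This is not an official repair procedure, only an illustration of this poster’s workaround; the table and column names are the ones quoted in this thread, the operation is destructive, and it should only ever be run after (and in addition to) backing up the database.

```python
import shutil
import sqlite3

def delete_orphaned_blockset(db_path, blockset_id):
    """Destructive workaround sketch: drop one orphaned blockset and its
    matching BlocklistHash rows. A safety copy of the database file is
    written first, mirroring the "after a db backup" step above."""
    shutil.copy(db_path, db_path + ".bak")  # keep a backup before touching anything
    con = sqlite3.connect(db_path)
    with con:  # commits on success, rolls back on error
        con.execute("DELETE FROM BlocklistHash WHERE BlocksetID = ?", (blockset_id,))
        con.execute("DELETE FROM Blockset WHERE ID = ?", (blockset_id,))
    con.close()
```

Whether deleting these rows is actually safe for a given backup set is exactly what is unclear in this thread, so treat this as documentation of the workaround, not a recommendation.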

