Fatal error: Detected non-empty blocksets with no associated blocks

What --log-file-log-level did you use? I suggest Verbose.

If verbose is too much, try Information

What would be helpful (but it’s a lot of steps) is

  1. Start with a working backup job.
  2. Turn on --log-file-log-level (as you have).
  3. Experience the non-empty blocksets error (well, I hope you don’t, I hope no one does, but if you do…).
  4. Perform the SQL query in a database browser (quoted just below this list).
  5. Find 5 or 10 files “around” where the failing file was.
  6. Go back to the log file and search for those files. The relevant entries are likely from previous runs, not the current run that received the error.
  7. Hopefully something in the log file will be revealing.
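
For step 4, the query in question (it also appears later in this thread) is:

SELECT * FROM Blockset WHERE Length > 0 AND ID NOT IN (SELECT BlocksetId FROM BlocksetEntry);

Any row it returns is a blockset that claims a non-zero Length but has no BlocksetEntry rows behind it.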

@thomasp your PathTooLongException appears to be a known bug with --usn-policy. See this post.

I updated my reply #16 with an edit to this effect.

As a follow-up, I ended up adding filters to prevent Duplicati from trying to back up both the temp directory and the directory where Duplicati’s database was kept, along with a few other directories containing Windows files that were constantly changing and didn’t need to be backed up. Then I recreated the database. It’s been running for somewhere around a week and a half without a problem. Given that this error previously kept being generated within a day or two of recreating the database, and that happened several times, I feel pretty confident that these exclude filters are what is fixing it for me, and that they should point the developers toward fixing it in the code.
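
For illustration, the filters were along these lines (syntax per Duplicati’s --exclude option as I understand it; the paths are examples, not my exact ones):

--exclude="C:\Users\*\AppData\Local\Temp\"
--exclude="C:\Users\*\AppData\Local\Duplicati\"

The second one keeps Duplicati’s own database directory out of the backup.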


Interestingly, as soon as I turned on the most verbose log level for a job where this had happened, this error has stopped happening to me. So, either I got lucky so far, or an update has fixed it?


Are you using OneDrive backend?

I have two backups to two different OneDrive accounts that have been running for more than a year. 15 days ago, one of them failed with the “Detected non-empty blocksets” error. Repairing the database didn’t change anything, and recreating the database (it took a week) turned the error into “You have attempted to change the block-size on an existing backup”.
I’ve deleted the backup (bye bye, 6 months of backup history) and recreated the task, and now I’m getting “Object reference not set to an instance of an object” in different files every time I try to run the task.

The other backup task is working fine all the time, so I suspect the OneDrive account. I’ve had other bad experiences with OneDrive in the past. Microsoft changes something, or something on some servers gets unstable and stops working (and long after, they fix it and it works again).

I’m using local disk backup, and just today I had this problem once again. It just randomly hits you. After running a manual repair, backups work again.

It seems to me that Duplicati isn’t very robust against failures of the program. While I was backing up for the first time (testing this out), my computer’s battery died and the whole machine shut off. After that, I also had to force-quit Duplicati because it wouldn’t pause. Backup software like this should be able to recover from situations like that, but I’m not sure Duplicati is robust enough on those fronts yet. After those things happened, Duplicati gave me errors about a lot of things, but kept going and seemed to maybe be working? That is, until I went to restore and it gave me this “non-empty blocksets” error. After attempting the restore and getting that error, I no longer see any directories when I go to restore. This isn’t very encouraging :/

Ugh, and if you rename the directory where the backup is stored, it won’t remove the backup from your home screen! It really seems like a lot more errors need to be gracefully handled. Hmm, after restarting the Duplicati service, it seems to have gotten the picture and no longer displays the backup.


Without much technical knowledge, I may add the following experience, which occurred by chance:
When Duplicati showed this “Detected non-empty blocksets with no associated blocks” error, and repair and the other suggestions I found here didn’t make it go away, I finally chose to recreate the backup. To do that, I created a new backup from the config file of the old error-prone backup and renamed it, but forgot to change the destination folder!
I ran it, and the above-mentioned error did not reappear. I hope this new backup does what it is expected to do.
Running Duplicati 2 as a server on Win10, backing up to an external HD. Hope this helps.

My question: Can I trust this backup, since it still gives me a few strange errors?

2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.BackupHandler-SnapshotFailed]: Failed to create the snapshot: System.UnauthorizedAccessException: Attempted to perform an unauthorized operation.
at Alphaleonis.Win32.Vss.VssBackupComponents..ctor()
at Alphaleonis.Win32.Vss.VssImplementation.CreateVssBackupComponents()
at Duplicati.Library.Snapshots.WindowsSnapshot..ctor(IEnumerable`1 sources, IDictionary`2 options)
at Duplicati.Library.Snapshots.SnapshotUtility.CreateWindowsSnapshot(IEnumerable`1 folders, Dictionary`2 options)
at Duplicati.Library.Main.Operation.BackupHandler.GetSnapshot(String sources, Options options),
2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: C:\Documents and Settings\Hannah,
2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: C:\Documents and Settings\Hannah,
2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: C:\Documents and Settings\DefaultAppPool,
2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: C:\Documents and Settings\DefaultAppPool, … ]
Errors:
[ 2019-04-30 06:48:30 +02 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-CheckingErrorsForIssue1400]: Checking errors, related to #1400. Unexpected result count: 0, expected 1, hash: 6y9axTuPuScijYqpJYoSIrWjr4oCT8anZyaWz45/PCw=, size: 102400, blocksetid: 2323039, ix: 15, fullhash: +9taBJbTJQPh6PnJfFf8LwBNUwtcmipp+dAh1VRZXLM=, fullsize: 1770112,
2019-04-30 06:48:30 +02 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-FoundIssue1400Error]: Found block with ID 7194036 and hash 6y9axTuPuScijYqpJYoSIrWjr4oCT8anZyaWz45/PCw= and size 28136,
2019-04-30 07:33:32 +02 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-CheckingErrorsForIssue1400]: Checking errors, related to #1400. Unexpected result count: 0, expected 1, hash: RchjnSGyxBpoBYVUF+eTOJs01I7NwR+px7xJ6/OHNsc=, size: 102400, blocksetid: 2323154, ix: 4, fullhash: et74+L9UlZs0anjdoQfPQdLus4M60e7CcV1TdNlF44Y=, fullsize: 2504664,
2019-04-30 07:33:32 +02 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-FoundIssue1400Error]: Found block with ID 7199469 and hash RchjnSGyxBpoBYVUF+eTOJs01I7NwR+px7xJ6/OHNsc= and size 21832 ]

My question would be: Do these errors compromise the backup?

I just received this error last night. The backup client is running Linux. The backup server is a Nextcloud instance using WebDAV on a Linux system. I do not have snapshots turned on for this backup system.

The result of the first query is:

sqlite> SELECT * FROM Blockset WHERE Length > 0 AND ID NOT IN (SELECT BlocksetId FROM BlocksetEntry);
ID|Length|FullHash
2359268|7804|gR5EPkHAuZLY+iQuMY0YX6tW36r3r/SAYq+K/Pr3+Fg=

The result of checking for that in the File table is no result:

sqlite> SELECT * FROM File WHERE BlocksetID = 2359268;

I’ve copied the database off to the side so that I can run any other queries on it that will help to debug this issue.
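
In case it helps, two follow-up queries I plan to run against the copy (BlocklistHash and Metadataset are the other tables I see referencing a BlocksetID in the schema; 2359268 is the ID returned above):

sqlite> SELECT * FROM BlocklistHash WHERE BlocksetID = 2359268;
sqlite> SELECT * FROM Metadataset WHERE BlocksetID = 2359268;

If either returns rows, the orphaned blockset is still referenced somewhere other than the File table.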

I’ve started a database repair and will see if this works. Otherwise it’ll be a few days to create a new backup.

I had this error a few months ago. I remember retrying the same backup over and over with fresh installs of Duplicati. It kept failing. Trying to repair? It has to download and unzip everything. I fell back to the previous beta. Can’t believe the devs haven’t fixed this yet…

The key to enabling a fix is a good test case that reproduces it, and (from skimming this thread) I think it is still a random thing. If anybody on the thread has found a reliable way to trigger it, or a clue as to what causes it, please point to it. I’m just skimming this rather long thread, but I’m not seeing much that would help the developers.

Other things that may help are a database bug report as mentioned earlier, or logs at as detailed a level as one can stand, also mentioned earlier. Since it’s been many posts back, that means Creating a bug report and/or --log-file with --log-file-log-level=Profiling plus --profile-all-database-queries (it will create a huge log).
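
Spelled out as advanced options, that is something like this (the log path is just an example):

--log-file=C:\temp\duplicati-profiling.log
--log-file-log-level=Profiling
--profile-all-database-queries=true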

Maybe easier is to let the developers do the heavy debugging, provided someone who hits it on a somewhat regular basis can boil it down to reproducible steps so that development can recreate it for debugging. Filing an issue on it is best.

Hi,

I have been having the same issue for some days now. I read some posts and tried a repair and even a recreate of the database. Nothing helped; luckily the other backup job, which is the most important one, runs smoothly.

Therefore I am offering my help to gather debug logs or any other information needed to get an idea of why this is happening. This thread and others are quite long and offer lots of ideas and tasks to try, so I am asking for someone to guide me on what information is really needed to get this fixed.

The backup I am talking about holds no personal or valuable data, and it is only 16 GB in size, which makes recreating it no big deal.

If someone could help me gather the right information and place it where it is needed, I am happy to do the work at the end.

Thanks

I think someone should place a guard in StreamBlockSplitter.cs for the hashcollector. It can be empty, and then there can’t be any BlocksetEntry rows. A check before the final insert in LocalBackupDatabase.AddBlockset would also be nice; I bet in those cases there is no entry for the specific foreach loop to process. The easiest explanation at first glance would be a stream closed inside the while loop… but that can’t explain a generated “filehash” for the blockset.

Link to my comment on this issue on GitHub: Locked files seems to cause corrupt backup · Issue #3594 · duplicati/duplicati · GitHub

Hi,

I had the same problem in a backup set.
I found a query in the log: SELECT * FROM "Blockset" WHERE "Length" > 0 AND "ID" NOT IN (SELECT "BlocksetId" FROM "BlocksetEntry");
Only one row was returned for me in the SQL browser. I found a matching record in the BlocklistHash table (by BlocksetID), so after backing up the db I deleted that one record from each of the BlocklistHash and Blockset tables.
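For anyone who wants the concrete statements, they were of this form (run them only after the db backup, and substitute the ID your own SELECT returned; 2359268 is just the example from earlier in this thread):

DELETE FROM BlocklistHash WHERE BlocksetID = 2359268;
DELETE FROM Blockset WHERE ID = 2359268;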
And now my job is running again.

Gabor


I’m basically going to say the same thing that gabor just did, but I’m reporting another case and how I fixed it. I had this same issue happening on my macOS machine.

(And the obvious warnings here before anyone proceeds. I’m not a SQL expert. I’m not a Duplicati expert. Make sure you make a backup of your database before doing anything.)

  1. On a Mac, the database file is in: ~/.config/Duplicati/
    It’s the long string of numbers ending in “.sqlite”. It will probably be a pretty big file.

  2. I made a copy of the sqlite file as backup.

  3. I then opened the sqlite file using https://sqlitebrowser.org/. (Or really whatever SQL tool you want.)

  4. I ran the query as suggested by johnvk:

SELECT * FROM Blockset WHERE Length > 0 AND ID NOT IN (SELECT BlocksetId FROM BlocksetEntry)

  5. That found exactly one entry in my database, in the Blockset table. I deleted that entry (see the statement after this list).

  6. In Duplicati, I then repaired the database (I don’t know if this was necessary or not).

  7. Then the backup worked! Huzzah.
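
For clarity, the delete in step 5 was a single statement of this form (1234567 is a stand-in; use the ID your SELECT actually returned):

DELETE FROM Blockset WHERE ID = 1234567;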


I also get “Detected non-empty blocksets with no associated blocks!” error.
Using Windows 7, Duplicati 2.0.4.22_canary_2019-06-30.
Remote is Debian 8 SFTP.

I just ran into this error myself. Win10, Duplicati as a service, 2.0.4.5_beta_2018-11-28.
Remote is a mapped network drive on my Asus router.
Backups have been running fine for a while, otherwise (as far as I know - basically, I didn’t see errors).

The last backup was on Friday last week, and this just started happening after I returned from a business trip, so there was no tinkering involved along the way. I tried repair, and it didn’t find anything wrong. I’m currently in the middle of a “recreate database”, which is taking a very long time; it’s been running for about 12 hours now, which seems like a lot for so little data (relatively speaking… I only have a 75 GB backup).

In full disclosure, I did stop the backup on Saturday as it was backing up some large and unnecessary files. I stopped the backup in the interface, and told it to stop immediately (instead of waiting for the file to complete). I then deleted those files and restarted the backup. I thought it ran without issue, but maybe it didn’t and that’s what caused it?

I also wish I had seen gabor’s recommendation before so I could have tried that instead, but I’m in it now, so I’ll let it go until it finishes, I guess.

It surprises me that it would take this long to rebuild the DB. Again, I’m running locally, so if I ran over the GbE wire, I should see 1 Gbit/s rates for a normal backup. 75 GB = 600 Gbit, so theoretically I’d get this backup done in 600 seconds, or 10 minutes. Theoretical speeds aside, even if I only got 10% of that throughput, I’d expect to see 100 minutes, or less than 2 hours. Why does it take so long to rebuild, since that’s not actually performing a backup? (I don’t see that much activity on the drive during this process… so it’s not network limited?) Or maybe my math is way off?

Still, I’d like to know what the official fix is… or what the workaround fixes are in case this happens again.

It looks like you can touch the DB files and it solves the issue.

Before the above fix, I got the same issue under the same conditions; for more details and the workaround, see here. You can also check here and here for the detailed workaround.


Thanks… I’ll keep that in mind in the future. My database recreate finally completed this morning, and I ran a subsequent backup that completed successfully, so that’s good.

I do agree with your other post: it’s unacceptable for this to happen, and the software is simply not very fault tolerant as it is today. The fact that I stopped it, and told it to stop immediately… and that broke it? That’s ridiculous. Don’t provide an option that can break the software! If I have to wait for the current file to finish, then tell me so…

But furthermore, as in your case, if you can’t wait for it to finish and you hibernate or whatever, that shouldn’t break it either. Sure, the backup won’t complete, but it must be able to recover, and not force you to work around it and jump through hoops to fix it (or, as in my case, wait MANY hours for it to fix itself). Or at least have a repair function that works for these cases…

I’m hoping something is done to improve this quickly…