Fatal error: Detected non-empty blocksets with no associated blocks

It seems to me that Duplicati isn’t very robust to failures of the program. While I was backing up for the first time (testing this out), my computer’s battery died and the whole machine shut off. After that, I also had to force quit Duplicati because it wouldn’t pause. Backup software like this should be able to recover from situations like that, but I’m not sure Duplicati is robust enough on those fronts yet. Afterwards, Duplicati gave me errors about a lot of things, but kept going and seemed to maybe be working. That is, until I went to restore and got this “non-empty blocksets” error. After attempting the restore and getting that error, I no longer see any directories when I go to restore. This isn’t very encouraging : /

Ugh, and if you rename the directory where the backup is stored, it won’t remove the backup from your home screen! It really seems like a lot more errors need to be gracefully handled. Hmm, after restarting the Duplicati service, it seems to have gotten the picture and stopped displaying the backup.

Without much technical knowledge, I may add the following experience, which occurred by chance:
When Duplicati showed this error “Detected non-empty blocksets with no associated blocks”, and repair and the other suggestions I found here didn’t make the error go away, I finally chose to recreate the backup. To do that, I created a new backup from the config file of the old, error-prone backup and renamed it, but forgot to change the destination folder!
I ran it, and the above-mentioned error did not reappear. I hope this new backup does what it is expected to do.
Running Duplicati 2 as a server on Win10, backing up to an external HD. Hope this helps.

My question: Can I trust this backup, since it still gives me a few strange errors?

2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.BackupHandler-SnapshotFailed]: Error creating the snapshot: System.UnauthorizedAccessException: Attempted to perform an unauthorized operation.
at Alphaleonis.Win32.Vss.VssBackupComponents..ctor()
at Alphaleonis.Win32.Vss.VssImplementation.CreateVssBackupComponents()
at Duplicati.Library.Snapshots.WindowsSnapshot..ctor(IEnumerable`1 sources, IDictionary`2 options)
at Duplicati.Library.Snapshots.SnapshotUtility.CreateWindowsSnapshot(IEnumerable`1 folders, Dictionary`2 options)
at Duplicati.Library.Main.Operation.BackupHandler.GetSnapshot(String sources, Options options),
2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: C:\Documents and Settings\Hannah,
2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: C:\Documents and Settings\Hannah,
2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: C:\Documents and Settings\DefaultAppPool,
2019-04-30 05:28:48 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: C:\Documents and Settings\DefaultAppPool, … ]
Errors:
[ 2019-04-30 06:48:30 +02 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-CheckingErrorsForIssue1400]: Checking errors, related to #1400. Unexpected result count: 0, expected 1, hash: 6y9axTuPuScijYqpJYoSIrWjr4oCT8anZyaWz45/PCw=, size: 102400, blocksetid: 2323039, ix: 15, fullhash: +9taBJbTJQPh6PnJfFf8LwBNUwtcmipp+dAh1VRZXLM=, fullsize: 1770112,
2019-04-30 06:48:30 +02 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-FoundIssue1400Error]: Found block with ID 7194036 and hash 6y9axTuPuScijYqpJYoSIrWjr4oCT8anZyaWz45/PCw= and size 28136,
2019-04-30 07:33:32 +02 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-CheckingErrorsForIssue1400]: Checking errors, related to #1400. Unexpected result count: 0, expected 1, hash: RchjnSGyxBpoBYVUF+eTOJs01I7NwR+px7xJ6/OHNsc=, size: 102400, blocksetid: 2323154, ix: 4, fullhash: et74+L9UlZs0anjdoQfPQdLus4M60e7CcV1TdNlF44Y=, fullsize: 2504664,
2019-04-30 07:33:32 +02 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-FoundIssue1400Error]: Found block with ID 7199469 and hash RchjnSGyxBpoBYVUF+eTOJs01I7NwR+px7xJ6/OHNsc= and size 21832 ]

My question would be: Do these errors compromise the backup?

I just received this error last night. The backup client is running Linux. The backup server is a Nextcloud instance using WebDAV on a Linux system. I do not have snapshots turned on for this backup.

The result of the first query is:

sqlite> SELECT * FROM Blockset WHERE Length > 0 AND ID NOT IN (SELECT BlocksetId FROM BlocksetEntry);
ID|Length|FullHash
2359268|7804|gR5EPkHAuZLY+iQuMY0YX6tW36r3r/SAYq+K/Pr3+Fg=

Checking for that blockset in the File table returns no result:

sqlite> SELECT * FROM File WHERE BlocksetID = 2359268;

I’ve copied the database off to the side so that I can run any other queries on it that will help to debug this issue.
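For reference, here are a couple of follow-up queries I can run to see whether anything still references that blockset (the table names are my best guess from the schema, so this may not cover every reference):

sqlite> SELECT * FROM BlocklistHash WHERE BlocksetID = 2359268;
sqlite> SELECT * FROM Metadataset WHERE BlocksetID = 2359268;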

I’ve started a database repair and will see if this works. Otherwise it’ll be a few days to create a new backup.

I had this error a few months ago. I remember retrying the same backup over and over with fresh installs of Duplicati. It kept failing. Trying to repair? It has to download and unzip everything. I fell back to the previous beta. Can’t believe the devs haven’t fixed this yet…

The key to allowing a fix is a good test case that reproduces it, and (from skimming here) I think this is still a random thing. If anybody on the thread has found a reliable way to trigger it, or a clue as to what causes it, please point it out. I’m just skimming the rather long thread, but not seeing much that would help the developers.

Other things that may help are a database bug report as mentioned earlier, or logs at as detailed a level as one can stand, also mentioned earlier. Since it’s been many posts back, that means Creating a bug report and/or --log-file with --log-file-log-level=Profiling plus --profile-all-database-queries (it will create a huge log).
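For example, the advanced options could look something like this (the log path is just a placeholder; pick any location where you can spare a few GB):

--log-file=/path/to/duplicati-profiling.log
--log-file-log-level=Profiling
--profile-all-database-queries=true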

Maybe easier is to let development do the heavy debugging, provided someone who hits it on a somewhat regular basis can simplify it into steps, so development can reproduce it for debugging. Filing an issue on it is best.

Hi,

I have been having the same issue for a few days. I read some posts and tried a repair and even a recreate of the database. Nothing helped; luckily the other backup job, which is the most important one, runs smoothly.

Therefore I am offering my help to get debug logs or any information needed to figure out why this is happening. This thread and others are quite long and offer lots of ideas and tasks to do, so I am asking for someone to guide me on what information is really needed to get this fixed.

The backup I am talking about holds no personal or valuable data, and it is only 16GB in size, which makes recreating it no big deal.

If someone could help me gather the right information and place it where it is needed, I am happy to do the work at the end.

Thanks

I think someone should place a guard in StreamBlockSplitter.cs for the hashcollector. It can end up empty, and then there can’t be any BlocksetEntry rows. A check before the final insert in LocalBackupDatabase.AddBlockset would also be nice; I bet in those cases the relevant foreach loop never iterates. The easiest explanation at first glance would be a stream that gets closed inside the while loop… but that can’t explain a generated file hash for the blockset.

Link to my comment to this issue on github: Locked files seems to cause corrupt backup · Issue #3594 · duplicati/duplicati · GitHub

Hi,

I had the same problem in a backup set.
I found a query in the log: SELECT * FROM “Blockset” WHERE “Length” > 0 AND “ID” NOT IN (SELECT “BlocksetId” FROM “BlocksetEntry”);
Only one row was returned for me in the SQL browser. I found a matching record in the BlocklistHash table (by BlocksetID), so after a database backup I deleted that one record from each of the BlocklistHash and Blockset tables.
And now my job is running again.
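A sketch of what that amounted to in SQL (the ID placeholder stands for whatever the SELECT above returned for you; back up the database first):

DELETE FROM BlocklistHash WHERE BlocksetID = <orphaned blockset ID>;
DELETE FROM Blockset WHERE ID = <orphaned blockset ID>;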

Gabor

I’m basically going to say the same thing gabor just did, but I’m reporting another case and how I fixed it. I had this same issue happening on my macOS machine.

(And the obvious warnings here before anyone proceeds. I’m not a SQL expert. I’m not a Duplicati expert. Make sure you make a backup of your database before doing anything.)

  1. On a Mac, the database file is in: ~/.config/Duplicati/
    It’s the long string of numbers ending in “.sqlite”. It will probably be a pretty big file.

  2. I made a copy of the sqlite file as a backup.

  3. I then opened the sqlite file using https://sqlitebrowser.org/. (Or really whatever SQL program you want.)

  4. I ran the query as suggested by johnvk:

SELECT * FROM Blockset WHERE Length > 0 AND ID NOT IN (SELECT BlocksetId FROM BlocksetEntry)

  5. That found exactly one entry in my database, in the Blockset table. I deleted that entry.

  6. In Duplicati, I then repaired the database (don’t know if this was necessary or not).

  7. Then the backup worked! Huzzah.
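If you want to double-check before running the backup again, re-running the same query should return no rows after the delete (it did for me, and as far as I can tell it’s the same check that produces the error):

SELECT * FROM Blockset WHERE Length > 0 AND ID NOT IN (SELECT BlocksetId FROM BlocksetEntry)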

I also get “Detected non-empty blocksets with no associated blocks!” error.
Using Windows 7, Duplicati - 2.0.4.22_canary_2019-06-30.
Remote is Debian 8 SFTP.

I just ran into this error myself. Win10, Duplicati as a service, 2.0.4.5_beta_2018-11-28.
Remote is to a mapped network drive on my Asus router
Backups had otherwise been running fine for a while (as far as I know; basically, I didn’t see errors).

The last backup was on Friday last week, and this just started happening after I returned from a business trip, so there was no tinkering involved along the way. I tried a repair, and it didn’t find anything wrong. I’m currently in the middle of a database recreate, which is taking a very long time (it’s been running for about 12 hours now, which seems like a lot for such little data; relatively, anyway, as I only have a 75GB backup).

In full disclosure, I did stop the backup on Saturday as it was backing up some large and unnecessary files. I stopped the backup in the interface, and told it to stop immediately (instead of waiting for the file to complete). I then deleted those files and restarted the backup. I thought it ran without issue, but maybe it didn’t and that’s what caused it?

I also wish I had seen gabor’s recommendation before so I could have tried that instead, but I’m in it now, so I’ll let it go until it finishes, I guess.

It surprises me that it would take this long to rebuild the DB. Again, I’m running locally, so over the GbE wire I should see 1 Gbit/s rates for a normal backup. 75GB = 600Gbit, so theoretically this backup would be done in 600 seconds, or 10 minutes. Theoretical speeds aside, even at only 10% throughput I’d expect about 100 minutes, or less than 2 hours. Why does the rebuild take so long when it’s not actually performing a backup? (I don’t see that much activity on the drive during this process… so it’s not network limited?) Or maybe my math is way off?

Still, I’d like to know what the official fix is… or what the workaround fixes are in case this happens again.

It looks like touching the DB files solves the issue.

Before the above fix, I got the same issue under the same conditions; for more details and for the workaround see here. You can also check here and here for the detailed workaround.

Thanks… I’ll keep that in mind in the future. My database recreate finally completed this morning, and I ran a subsequent backup that completed successfully, so that’s good.

I do agree with your other post; it’s unacceptable for this to happen, and the software is simply not very fault tolerant as it stands today. The fact that I stopped it, and told it to stop immediately… and that broke it? That’s ridiculous - don’t provide an option that can break the software! If I have to wait for the current file to finish, then tell me so…

But furthermore, as in your case, if you can’t wait for it to finish and you hibernate or whatever, that shouldn’t break it either. Sure, the backup won’t complete, but it must be able to recover… and not force you to jump through hoops to fix it (or, as in my case, wait MANY hours for it to fix itself). Or at least have a repair function that works for these cases…

I’m hoping something is done to improve this quickly…

Empty source file can make Recreate download all dblock files fruitlessly with huge delay #3747 is one possibility (and probably quite common) but not the only one. It’s fixed in recent canary (if you’re daring enough) or v2.0.4.21-2.0.4.21_experimental_2019-06-28, which is sort of a lead-in to an upcoming beta.

Unfortunately “no associated blocks” hasn’t been tracked down AFAIK. Ideally there would be test steps that reliably reproduce it so developers can look at it in depth, e.g. with --log-file-log-level=profiling plus --profile-all-database-queries=true. You can set that up on a --log-file if you like, but what to do with a log containing some personal info (primarily pathnames) is a question. An ideal reproducible test is non-personal.

At least two of us have provided a way to replicate the problem - stop the backup (either with the “stop immediately” option, or via hibernate/sleep as the other user mentioned). Then when it fails and stops, retry and it should happen again. That’s what appears to have done it for us.

If you need me to replicate it myself, I can generate a test backup and perform this; I just need to know exactly what log settings are required and what log files to provide. I’m happy to do this, just tell me how.

Thanks for the close reading. They originally looked more like leads than definite causes, but any leads are good.

  • “stop immediately” here as one of a chain of actions that preceded the problem. It might be a clue.

  • “hibernate or whatever” mentioned too as “shouldn’t break it either”, probably referring to post here.

Stop has been a trouble spot that got worse in 2.0.4.5, with some pushing for a fix underway. Work is here. Interestingly, an opening comment talks about removing the stop button until corruption issues get fixed; however, that corruption (at least based on the work at the bottom) was “Unexpected difference in fileset”.

It would be very helpful if you could play with it, using pathnames you don’t mind revealing, and a storage type that is very available. I usually like local file for that reason, but possibly the issue is storage-specific.
Possibly it’s also Duplicati version-specific. I did do quite a bit of stopping and sleeping but didn’t get this.

If you can get sure-fire test steps, logs become less important, but the most you can get is the following:

These go well with database bug reports which attempt to sanitize pathnames against ordinary browsers. Going past 2.0.4.5 will hit a regression though. It forgot to sanitize a new table. Use non-sensitive paths…

If you really want to get into it (and if not, a developer may), to debug issues that might have been caused awhile before the symptom showed up, sometimes keeping a trail of recent database content is valuable. Here is a script that I used while solving “Unexpected difference in fileset” test case and code clue #3800.

@herbert actually volunteered debug help earlier (thanks!), but the problem appeared more vague then. Running heavy debugs all the time gets painful; it’s a little easier if one has an idea of how to start debugging. Sorry about the earlier non-response, but anybody who wants to jump into these steps is welcome to join.

Thanks, and I hope you can nail it with something anyone can do. If you get there, feel free to file an issue, which is officially the tracking system (support requests are too numerous and have no tracking method).

Is there an update on this? I can confirm that the issue seems to happen when using “stop now” or when restarting the host mid-backup.

Is this when using the “Stop now” and you get an error saying “Thread was being aborted”?

Welcome to the forum @ShaneKorbel

CheckingErrorsForIssue1400 and FoundIssue1400Error test case, analysis, and proposal #3868 found steps to reproduce that and also “Detected non-empty blocksets with no associated blocks!”. The fix by @BlueBlock went in yesterday, so it should be in Canary soon, then Experimental, and Beta. BUT the issue had to do with source files changing during the backup, and I don’t know which reports here that fits. Yours sounds different, and I’ll leave it up to the individual reporters here to decide if the above fix helps any.

Fix ‘stop after current file’ #3836 and Fix pausing and stopping after upload #3712 are current efforts, but I’m not sure where they stand. Note you have the developer of the first named request talking to you now.
