Fatal error System.Exception: Detected non-empty blocksets with no associated blocks!

I now get the same error, and repairing the database does not solve it. Is there a command I can run to find which orphaned files on the remote destination are causing the issue? Would removing them solve it?

I was able to get things working again because I also back up the local job database separately. I restored it to the date when the main job was last working, deleted from the remote destination all dblock and dlist files created after that date, and then repaired the database. The jobs are now running again.

This way I needed neither a database recreate nor a restart of the job from scratch. I suspect the issue was caused by a flaky internet connection and Duplicati's inability to handle it.

However, given this kind of issue in Duplicati, I find it crucial to keep a separate backup of the local job database and to disable the automatic compact feature (using --no-auto-compact). Can anybody confirm that, with this setting, remote dblock and dlist files are never modified again unless explicitly requested (apart from files affected by the retention policy, of course)?

(This is on beta 2.0.4.5)

Yes @gianlucabertaina, there are some steps in this post on the same error: Fatal error: Detected non-empty blocksets with no associated blocks. Another user and I followed them and got similar results: no precise filenames, but filenames "nearby". It doesn't lead to a fix, though, just more clarity about the circumstances of what is looking like a bug in Duplicati.

It would be great to have a third set of results from these steps, if you have the time.

Hi, I would be happy to contribute, but I was not very systematic in my debugging, so I had already deleted the faulty database once the restored one proved to work.

I have this issue as well on one of my backups. It started recently, but the update to the 2.0.4.5 beta didn't seem to trigger it right away; I think the profile worked for a while after I updated. First I noticed errors that a file name was too long and backups were failing. Then I noticed it was having trouble backing up my new Outlook OST file. So I added a filter to exclude the *.OST extension and re-ran the backup to get a better log of the long-named file. I no longer got that error, but now I get this one about non-empty blocksets.

If I can help with any testing, please let me know. I have a second backup going off-site that’s still working, so I’m not in a critical state where I can’t pause for troubleshooting.

I get this error, but not the path-too-long one.

It would help if you could follow the steps in replies 3 and 6 of Fatal error: Detected non-empty blocksets with no associated blocks, which involve using an SQL browser (download link provided there) and running a query, among other steps, and then post the results. Unfortunately, the file name that is causing trouble is not recorded, but file names "nearby" are, which might give a hint.
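For reference, this is the query from those replies; it is the same check that the VerifyConsistency code runs when it raises this error, just selecting the offending rows instead of counting them:

```
SELECT *
FROM "Blockset"
WHERE "Length" > 0
  AND "ID" NOT IN (SELECT "BlocksetId" FROM "BlocksetEntry")
```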

Interesting that your problem happened after adding a filter to exclude some files. That’s generally consistent with the theory that when Duplicati goes to remove a file, it removes it from two of its DB tables but not the 3rd.

I’m not sure if I did this correctly as I don’t mess with SQL much.

I downloaded DB Browser for SQLite and opened the database in read-only mode: "C:\Windows\System32\config\systemprofile\AppData\Roaming\Duplicati\7481…8475.sqlite".

Then I pasted your code from the other thread “SELECT * FROM Blockset WHERE Length > 0 AND ID NOT IN (SELECT BlocksetId FROM BlocksetEntry)” without quotes.

The response I got was:
0 rows returned in 1ms from: SELECT * FROM Blockset WHERE Length > 0 AND ID NOT IN (SELECT BlocksetId FROM BlocksetEntry)

Sorry, also don’t know how to put the commands or logs into separate boxes in the thread to make them more readable.

OK, thanks @sjl1986, it sounds like you did it right.

But those results are inconsistent with what others have seen. Quite interesting.

As a sanity check, can you confirm that there is only one .sqlite file in that directory with that naming scheme, and that you're still getting this specific error: Detected non-empty blocksets with no associated blocks? The SQL query you ran comes directly from the code that triggers this error. Also, did you happen to run this under two different user accounts? I'm trying to work out how the SQL query in the code gets a different result than the SQL browser does.

Use ~~~ or ``` (those are backticks) on the lines before and after to make a code block.

Thanks for the code block tip. I have always run Duplicati as a service, under the local system account. This is the only .sqlite file numbered that way; I have others labeled like "backup 201812#####.sqlite" and "MEOIF…sqlite" (all capital letters), plus a Duplicati-server.sqlite.

I ran it again and below is the abbreviated output of the profiling log. I removed all but the last file before each new section to keep it short. If you want to see the whole thing I can host a cleansed copy of it somewhere.

2019-01-21 00:13:20 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: C:\Program Files (x86)\Steam\steamapps\common\wolfenstein 3d\base\SHADERS\CRT-geom-blend.fx
2019-01-21 00:13:20 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 16000 - 1) / 16000) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual") took 0:00:00:00.206
2019-01-21 00:13:20 -05 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "Blockset" WHERE "Length" > 0 AND "ID" NOT IN (SELECT "BlocksetId" FROM "BlocksetEntry")
2019-01-21 00:13:20 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: C:\Program Files (x86)\Steam\steamapps\common\wolfenstein 3d\base\SHADERS\Matrix.fx
2019-01-21 00:13:20 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteScalarInt64]: ExecuteScalarInt64: SELECT COUNT(*) FROM "Blockset" WHERE "Length" > 0 AND "ID" NOT IN (SELECT "BlocksetId" FROM "BlocksetEntry") took 0:00:00:00.324
2019-01-21 00:13:20 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-IncludingPath]: Including path as no filters matched: C:\Program Files (x86)\Steam\steamapps\common\wolfenstein 3d\base\SHADERS\MCGreen.fx
2019-01-21 00:13:22 -05 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.Exception: Detected non-empty blocksets with no associated blocks!
   at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency(Int64 blocksize, Int64 hashsize, Boolean verifyfilelists, IDbTransaction transaction)
   at Duplicati.Library.Main.Operation.Backup.BackupDatabase.<>c__DisplayClass32_0.<VerifyConsistencyAsync>b__0()
   at Duplicati.Library.Main.Operation.Common.SingleRunner.<>c__DisplayClass3_0.<RunOnMain>b__0()
   at Duplicati.Library.Main.Operation.Common.SingleRunner.<DoRunOnMain>d__2`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__19.MoveNext()
2019-01-21 00:13:22 -05 - [Profiling-Timer.Finished-Duplicati.Library.Main.Controller-RunBackup]: Running Backup took 0:00:00:32.195
2019-01-21 00:13:24 -05 - [Profiling-Duplicati.Library.Modules.Builtin.SendMail-SendMailResult]: Whole SMTP communication: ~~~

In general you can use the git help on formatting; it works here too. That's what I've tried so far.

I don't know what to make of that log. @ts678, can you help? Is that file related to the error? The error is an inconsistency in the current database, i.e. in old files that were stored last time, whereas the file reported seems to be part of the new scan. Maybe I just don't understand how it works in enough detail.

I do like that it reports the exact SQL query we have been running manually. (Actually, the log has COUNT(*) whereas we have been using just "*".)
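For completeness, here are the two forms side by side. The filter condition is identical, so if the code's COUNT(*) comes back non-zero, the browser query should return at least one row:

```
-- what the backup code runs (as seen in the profiling log)
SELECT COUNT(*) FROM "Blockset" WHERE "Length" > 0 AND "ID" NOT IN (SELECT "BlocksetId" FROM "BlocksetEntry");

-- what we have been pasting into the SQL browser
SELECT * FROM "Blockset" WHERE "Length" > 0 AND "ID" NOT IN (SELECT "BlocksetId" FROM "BlocksetEntry");
```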

So, you got this error while running a backup? And did you try the SQL query in the SQL browser before and/or after this, i.e., without any other Duplicati changes intervening? And you still get zero rows reported in the SQL browser?

Also, another sanity check: in the job settings, if you expand the drop-down, under "Advanced" there's "Database…". Can you make sure that location is the one you opened in the SQL browser? I'm sure it is, but just in case…
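Another way to cross-check, if you're comfortable opening Duplicati-server.sqlite in the same browser, is to list where the server thinks each job's database lives. This is only a sketch based on my understanding of the server database; I'm assuming a Backup table with Name and DBPath columns, which may differ between versions:

```
-- run against Duplicati-server.sqlite, not the job database
SELECT "Name", "DBPath" FROM "Backup";
```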

That makes two of us. Duplicati runs lots of concurrent processors that pass data to each other through channels; I found some documentation here on the ones apparently involved with backups, so delays between stages are possible. The best way to see whether that happened here might be to look inside the database and try to relate where the error occurred there to the file names in the log. Though I've only been following this loosely (thanks for orchestrating some of the pursuit), I think some people had no luck seeing the exact bad new file, but had some luck seeing where the finished files stopped.

Referring to the log, then, it's probably safe to say that something that went by is what broke, but exactly which file it was is less clear, hence the need to match the database against the log.

Previously running the SQL test query came up clean. Presumably after the logged events, it now won’t.

@ts678 Thanks!

Well, I gathered that the problem was caused, and stored in the DB, in a previous run, because we can run the SQL query (the one you suggested) without doing a backup or scanning for new files and still see the corruption sitting there in the DB.

Until we get more evidence, my working theory is that a file was deleted (in a previous run), and Duplicati removed it from two of its tables (BlocksetEntry and File) but not the third (Blockset), without noticing at the time. Then on the next run (say, the current one, which reports the error), Duplicati notices the corruption and errors out.
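A rough way to test that against the database is to select each orphaned Blockset row along with counts from the other two tables. This is only a sketch, and it assumes the File table has a BlocksetID column as described later in this thread; InBlocksetEntry is zero by construction, so the interesting column is InFile:

```
SELECT "B"."ID",
       (SELECT COUNT(*) FROM "BlocksetEntry" WHERE "BlocksetID" = "B"."ID") AS "InBlocksetEntry",
       (SELECT COUNT(*) FROM "File" WHERE "BlocksetID" = "B"."ID") AS "InFile"
FROM "Blockset" "B"
WHERE "B"."Length" > 0
  AND "B"."ID" NOT IN (SELECT "BlocksetID" FROM "BlocksetEntry")
```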

Sorry for the delay. Was tied up with work the last few days.

I didn’t even think to verify the DB under advanced settings. Let me preface by saying, I did a lot of testing and trial and error with Duplicati before I decided to adopt it as my backup solution. It may be time to blow it away and reinstall everything, or actually move it to a standalone PC instead of my daily driver.

That said, for some reason my database is actually under my user AppData path rather than the System32 path where it should be (in fact, all of my active profile DBs are this way). I also have other issues: when the PC starts, the server runs fine for inbound backups, but to access the web interface I need to restart the service once after every reboot.

In any case, I ran the command again on the correct DB, and I get one hit.

ID: 449199
Length: 107185152
Hash: UqcEEX93t1sBUSKeexb8kmhikWCCr5wA6DbFLSjnf0c=

Same as you: 449199 doesn't exist in the file list. The closest thing I can find is 4490**, which is part of my Chrome user profile.

When version 2.0.4.5 came out, I started trying to use the USN policy and the new filtering system. I still don't fully understand the filters. In 2.0.3.x I used Exclude > Temporary and System files; now there is a Filtering section AND an Exclude section. I enabled most of the "Filtering > Exclude filter group" options, but saw a huge rise in run time: instead of 10 minutes, my profile now took 2 hours. After reading other posts, I started troubleshooting by removing one filter at a time to find the culprit. Somewhere during all that, I got this issue. It may be related to me enabling and disabling the cache filters, which likely affect the Chrome cache?

Hope something here is useful.

I'm getting this error myself. Apparently when my machine got a major Windows update, it failed to bring over the configuration from C:\Windows.old\WINDOWS\System32\config\systemprofile\AppData\Local. So I stopped the service and moved the Duplicati folder from the old Windows folder to the new one. I then restarted the service and immediately started getting this error. This is a personal machine, so the backups are not critically important. I can run diagnostic queries if it's useful.

I've now started seeing the path-too-long error on my secondary backup for the same source drive. This started a few days ago, and nothing has changed in the configuration in weeks.

I checked the profiling logs and noticed the messages seemed to be from my browser cache folders. So I excluded one folder for Firefox, and one for Chrome where I saw the errors. On the next run, I got the blocksets error. If anyone would like to take a look I can post some logs from the previous run, as well as the one that started throwing the error. I have the profiling logs for both.

Unfortunately, now both of my backups for this drive are trashed, so I’m going to have to do something soon. I’ll be away from my PC next week, but can test or provide logs when I’m back.

Has anyone confirmed a newer version that does not have this issue? I'm getting nervous about my other backups now and would rather upgrade if it prevents the problem.

You can adapt the steps from Migrating from User to Service install on Windows, version 3, to move Duplicati's working folder to a directory that does not get rewritten on a Windows update, e.g. C:\ProgramData\Duplicati\Data, if it's worth it to you.

Yeah, we've been getting pretty consistent results: locked/temp/rapidly changing files are the culprit. It would be nice to confirm your situation is the same. Instructions here.

Well, this is great. If you could query the DB as before, then go back to the old logs and find the files around those missing BlocksetIDs, hopefully in a nice order, you should be able to infer exactly which files are the problem; this post suggests the same. That would be the first time we have been able to identify that, which would be progress.
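A rough sketch of that "nearby" lookup, in case it helps. I'm assuming the File table's Path and BlocksetID columns here, and the idea that neighbouring IDs were written around the same time is only a heuristic, not something the schema guarantees:

```
-- replace 449199 (the ID from the earlier result in this thread) with the ID your query returns
SELECT "Path", "BlocksetID"
FROM "File"
WHERE "BlocksetID" BETWEEN 449199 - 100 AND 449199 + 100
ORDER BY "BlocksetID"
```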

Also, maybe you’ve identified a way to cause it at will.

1. Back up with TEMP or CACHE files included.
2. Add a filter to exclude TEMP or CACHE.
3. Get the error.

Sorry I've taken so long to deal with this, but I can confirm almost everything you said in the linked thread: I have a single record in Blockset that is not in BlocksetEntry, File does not contain that BlocksetID, and all of the adjacent rows in File are dup-* files.

For whatever it’s worth, I clobbered the offending row in Blockset and everything seems to be working fine now.
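Roughly speaking, the equivalent SQL would be something like the following. Treat it as a sketch only and make a copy of the job's .sqlite file first; it removes only the rows that the consistency check would flag:

```
-- back up the .sqlite file before running this
DELETE FROM "Blockset"
WHERE "Length" > 0
  AND "ID" NOT IN (SELECT "BlocksetId" FROM "BlocksetEntry");
```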

Have you tried running a restore since then? Backing up is just wasted time and resources if restore isn't working.

Ref: Backup valid, but still unrestorable?