Fatal error System.Exception: Detected non-empty blocksets with no associated blocks!

@ts678 Thanks!

Well, I gathered that the problem was caused, and stored in the database, during a previous run: we can run the SQL query you suggested, without doing a backup or scanning for new files, and see the corruption already sitting there in the database.

Until we get more evidence, my working theory is that a file was deleted in a previous run, and Duplicati removed it from two of its tables (BlocksetEntry and File) but not the third (Blockset). Duplicati did not notice during that run, so on the next run (say, the current one, which reports the error) it detects the corruption and errors out.
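For reference, that check boils down to looking for Blockset rows with a non-zero Length that have no rows in BlocksetEntry. A minimal sketch of the query, assuming the usual Duplicati schema (Blockset with ID/Length/FullHash, BlocksetEntry with BlocksetID); this may not be the exact statement the error path runs:

```sql
-- Blocksets that claim a non-zero length but have no
-- BlocksetEntry rows, i.e. no blocks backing them
SELECT "ID", "Length", "FullHash"
FROM "Blockset"
WHERE "Length" > 0
  AND "ID" NOT IN (SELECT "BlocksetID" FROM "BlocksetEntry");
```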

Sorry for the delay. Was tied up with work the last few days.

I didn’t even think to verify the DB under advanced settings. Let me preface this by saying I did a lot of testing and trial and error with Duplicati before I decided to adopt it as my backup solution. It may be time to blow it away and reinstall everything, or to move it to a standalone PC instead of my daily driver.

That said, for some reason my database is actually in my user AppData path rather than in the System32 path where it should be for a service install (in fact, all of my active profile DBs are this way). I also have other issues: when the PC starts, the server runs fine for inbound backups, but to access the web interface I need to restart the service once after every reboot.

In any case, I ran the command again on the correct DB, and I get one hit.

ID: 449199
Length: 107185152
Hash: UqcEEX93t1sBUSKeexb8kmhikWCCr5wA6DbFLSjnf0c=

Same as you, 449199 doesn’t exist in the file list. The closest thing I can find is 4490**, which is part of my Chrome user profile.
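In case it helps anyone else, the kind of lookup involved is roughly this; it assumes the older schema where the File table carries Path and BlocksetID directly (newer databases expose File as a view):

```sql
-- Confirm no file references the orphaned blockset
SELECT "ID", "Path"
FROM "File"
WHERE "BlocksetID" = 449199;

-- Files with nearby BlocksetIDs, to guess what was being
-- processed around the same time
SELECT "ID", "Path", "BlocksetID"
FROM "File"
WHERE "BlocksetID" BETWEEN 449190 AND 449210
ORDER BY "BlocksetID";
```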

When version 2.0.4.5 came out, I started trying to use the USN policy and the new filtering system. I still don’t fully understand the filters. In 2.0.3.x I used the Exclude section > Temporary and System files. Now there is a Filtering section AND an Exclude section. I enabled most of the “Filtering > Exclude filter group” options, but I saw a huge rise in profile run time: instead of 10 minutes, my profile now took 2 hours. After reading other posts, I started troubleshooting by removing one filter at a time to try to find the culprit. Somewhere during all that, I got this issue. It may be related to my enabling and disabling the cache filters, which likely affect the Chrome cache.
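For anyone wanting to exclude the browser caches directly instead of via the filter groups, the exclude filters look something like this (the paths are examples for a default Windows profile, not my exact config):

```text
-*\AppData\Local\Google\Chrome\User Data\*\Cache\*
-*\AppData\Local\Mozilla\Firefox\Profiles\*\cache2\*
```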

Hope something here is useful.

I’m getting this error myself. Apparently when my machine got a major Windows update, it failed to bring over the configuration from C:\Windows.old\WINDOWS\System32\config\systemprofile\AppData\Local. So I stopped the service and moved the Duplicati folder from the old Windows folder to the new one, then restarted the service and immediately started getting this error. This is a personal machine, so the backups are not critically important. I can run diagnostic queries if it’s useful.

I’ve now started seeing the “path too long” error on my secondary backup for the same source drive. This started a few days ago, and nothing in the configuration has changed in weeks.

I checked the profiling logs and noticed the messages seemed to come from my browser cache folders. So I excluded one folder for Firefox and one for Chrome where I saw the errors. On the next run, I got the blocksets error. If anyone would like to take a look, I can post logs from the previous run as well as the one that started throwing the error; I have the profiling logs for both.

Unfortunately, both of my backups for this drive are now trashed, so I’m going to have to do something soon. I’ll be away from my PC next week, but I can test or provide logs when I’m back.

Has anyone confirmed a newer version that does not have this issue? I’m getting nervous about my other backups now and would rather upgrade if it prevents the problem.

You can adapt the steps from Migrating from User to Service install on Windows, version 3, to move Duplicati’s working folder to a directory that does not get rewritten on a Windows update, e.g. C:\ProgramData\Duplicati\Data, if it’s worth it to you.
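As a rough sketch (the exact flags may differ by version; I’m assuming the Duplicati.WindowsService.exe wrapper passes extra arguments through to the server, and that --server-datafolder is the relevant option):

```text
Duplicati.WindowsService.exe uninstall
Duplicati.WindowsService.exe install --server-datafolder=C:\ProgramData\Duplicati\Data
```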

Yeah, we’ve been getting pretty consistent results: locked, temporary, or rapidly changing files are the culprit. It would be nice to confirm your situation is the same. Instructions here.

Well, this is great. If you could query the DB as before, then go back to the old logs and find the files near those missing BlocksetIDs (hopefully in a nice order), you should be able to infer exactly which files are the problem; this post suggests the same. That would be the first time we have been able to identify that, which would be real progress.

Also, maybe you’ve identified a way to cause it at will (see the sketch after these steps):

1. Back up a source that includes TEMP or CACHE folders.
2. Add a filter to exclude TEMP or CACHE.
3. Get the error.
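A hypothetical command-line version of those steps (paths, target URL, and passphrase are made up; --exclude is the standard Duplicati filter option):

```text
:: Run 1: back up a folder that contains a live browser cache
Duplicati.CommandLine.exe backup file://D:\BackupTarget C:\Users\me\AppData\Local --passphrase=secret

:: Run 2: same backup, but now exclude the cache
Duplicati.CommandLine.exe backup file://D:\BackupTarget C:\Users\me\AppData\Local --passphrase=secret --exclude=*\Cache\*

:: The error reportedly shows up on run 2 or a later run
```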

Sorry I’ve taken so long to deal with this, but I can confirm almost everything you said in the linked thread. I have a single record in Blockset that has no rows in BlocksetEntry, and File does not contain that BlocksetID; all of the adjacent rows in File are dup-* files.
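Concretely, the three checks looked roughly like this (449199 stands in for the orphaned ID; substitute whatever the consistency query reports):

```sql
SELECT * FROM "Blockset"      WHERE "ID" = 449199;         -- one row: the orphan
SELECT * FROM "BlocksetEntry" WHERE "BlocksetID" = 449199; -- no rows
SELECT * FROM "File"          WHERE "BlocksetID" = 449199; -- no rows
```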

For whatever it’s worth, I clobbered the offending row in Blockset and everything seems to be working fine now.
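For anyone tempted to do the same, the “clobber” amounts to something like the statement below. Take a copy of the .sqlite file first; this is a sketch of a manual hack, not a supported repair path:

```sql
-- Delete orphaned blocksets: non-zero length, no BlocksetEntry rows
DELETE FROM "Blockset"
WHERE "Length" > 0
  AND "ID" NOT IN (SELECT "BlocksetID" FROM "BlocksetEntry");
```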

Have you tried running a restore since then? Backing up is just wasted time and resources if restore isn’t working.

Ref: Backup valid, but still unrestorable?
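A quick spot check from the command line might look like this (storage URL and paths are placeholders; --restore-path restores to a scratch folder instead of the original location):

```text
Duplicati.CommandLine.exe restore file://D:\BackupTarget "C:\Users\me\Documents\*" --restore-path=C:\RestoreTest --passphrase=secret
```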

Good point. I have tried to restore a couple of things and they both worked, but that doesn’t mean it’s all good. I probably just need to start a new backup.

Sorry again for the delay. I looked into the DB and my previous logs. I don’t really see a way to correlate the missing BlocksetIDs with the logs, since the logs only list file names and they are pretty scattered; BlocksetIDs near each other don’t seem to show up in the same order in the logs.

I do see that I have two bad BlocksetIDs in the latest DB search. One is in my Firefox profile and one in Chrome; you can see screenshots below. The bad IDs were 485448 and 491253.

[screenshot: SQLite query results showing the two orphaned BlocksetIDs]

I’m not sure where to go from here. Any suggestions or other things you’d like me to check?