Fatal error: Detected non-empty blocksets with no associated blocks

Hi,

OK, I will try Recreate first, and if that isn't sufficient, I will delete the DB and the remote files.

Delete DB: is that just the “Delete” button in the Maintenance section of the Web UI?
Delete Remote: what is the way to do that? Must I delete the files with a file explorer, or is there a built-in function in Duplicati? If so, what is the simplest and/or recommended way?

Thanks :slight_smile:

Yes, but the Recreate (delete and repair) button I was talking about is to the right of that.
The combination button will revert the Delete if the recreate done by the Repair doesn’t work…

Alternatively, you can rename the database file manually, if you want to make sure you have it. Presumably the idea is that keeping it (even with issues) might facilitate further efforts to solve.
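If you go the manual route, the rename is a one-liner from a shell. This is only a hypothetical sketch: the path below is invented for illustration (your real path is shown as the job's “Local database path”), and the demo creates a stand-in file so nothing real is touched.

```shell
# Hypothetical sketch: keep the damaged database under another name before a Recreate.
# The path below is made up; use your job's actual "Local database path".
DB="/tmp/duplicati-demo/ABCDEFGH.sqlite"
mkdir -p "$(dirname "$DB")" && : > "$DB"   # demo only: create a stand-in database file
mv "$DB" "$DB.broken"                      # keep the old DB around for later analysis
```

With the old file out of the way, Repair will do a full recreate, and the renamed copy is still available if deeper debugging is wanted.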


I don’t know of a button other than the “Delete remote files” checkbox on Deleting a backup job configuration which also offers an “Export configuration” so that the configuration can be saved.

Accessing the remote manually may be easier than the above, if you can do it. You can even move the files into a different folder (if you have enough space) if you want to be most conservative.

The combination of no database and no remote files would be a completely fresh start of the backup; however, you might also consider how long the initial uploads will take, and whether anything has priority.

Previously the advice would be to start with a small initial backup, maybe of the most important work, following that with additional increments. The situation is hardest if the computer (e.g. a laptop) must move around.

2.0.5.1 has a “Stop after current file” which lets one do graceful (though sometimes slow) stops if required. Avoid the “Stop now” button, as it’s a harder stop, and this sometimes causes problems.

Hello :slight_smile:

“Recreate” database worked, at least for my first backup after this.

Thanks a lot for all your explanations :slight_smile:

Bye


I encountered this error for the first time today. I upgraded from 2.0.4.5 to 2.0.5.1 last week, and I have had a successful backup run since then. So it can still occur post-2.0.5.1. Sorry!

Is this written correctly? If there’s a missing “not”, are you saying old ones were fine, then 2.0.5.1 not?
Note that just upgrading on an error will leave you with an error. 2.0.5.1 is meant to prevent new ones.

It was written correctly: I upgraded to 2.0.5.1 and continued to have successful backups, and then a week later I got this error.

However, some further updates:
I tried Repair. The log said “Destination and database are synchronized, not making any changes”, so presumably it did nothing.
I tried starting the backup again… and it worked fine.

So either the repair worked silently, or the problem fixed itself anyway.

Thanks for clarifying that you had a successful backup after update to 2.0.5.1 then the problem arose.

This error is extremely generic, so it’s not surprising that a fix to one way to get it leaves some others.
One that’s fixed (or at least improved) in Canary involves hard stop, e.g. “Stop now” or killing process.

Stop now results in “Detected non-empty blocksets with no associated blocks!” #4037
Fix exceptions caused by Stop Now #4042
Mentioned above, too late for 2.0.5.1, and with limit given as “One source of this error has been fixed”

Given your update though, I don’t know what this is, but I’m glad that it’s gone away. The error is from consistency checking of the local database, and any error that creeps in can sometimes be persistent.

If it helps: in my quest to find a solution to “Backup - Detected non-empty blocksets with no associated blocks”, I ran a find to locate the backup sets, deleted the first backup set made, and then started backing up again. As for clues about why I got the message, and why deleting the first backup set made all the difference, I suppose I won't know until my backup completes and I can do random restores or a full restore. I also deselected folders in the backup job while trying to figure out which folder was the culprit, so the process of deselecting and reselecting folders could also have done it. Still in the dark and learning along the way.


I get this error constantly. My remote files are now 1 TB, and recreating the database takes almost a full day. So whenever I get this error I cannot upload for a long time, and my PC sits busy recreating a database that gets borked anyway.

(What I can say is that it came shortly before/after I added/deleted files that I didn't want in the backup, whether it was running or not.)

It doesn’t look like you reported details, to try to figure out what’s going on. Care to, here or in a new topic?
Things possibly are a little better in 2.0.5.1 (what are you on?) compared to what existed at time of topic. There are still issues (I think) with things like hard stops. Canary has some fixes, but is still not perfect…
Someone who gets this constantly is ideal help for trying to isolate the problem, which is what’s needed.

I didn't report it so far, but I had to recreate the database twice.
Next time this happens I'll create a debug log from the webview before and after.

If you mean before and after the Recreate, that’s not nearly as helpful as before and after DB error, but unpredictable errors are unpredictable… What I do is keep a series of databases via run-script-before:

rem Get --dbpath pathname value without the .sqlite suffix
set DB=%DUPLICATI__dbpath:~0,-7%
rem and use this to maintain history of numbered older DBs
IF EXIST %DB%.4.sqlite MOVE /Y %DB%.4.sqlite %DB%.5.sqlite
IF EXIST %DB%.3.sqlite MOVE /Y %DB%.3.sqlite %DB%.4.sqlite
IF EXIST %DB%.2.sqlite MOVE /Y %DB%.2.sqlite %DB%.3.sqlite
IF EXIST %DB%.1.sqlite MOVE /Y %DB%.1.sqlite %DB%.2.sqlite
IF EXIST %DB%.sqlite COPY /Y %DB%.sqlite %DB%.1.sqlite
EXIT 0
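For anyone on Linux or macOS, a rough POSIX-shell equivalent of the batch rotation above might look like the sketch below. It assumes Duplicati exports the same DUPLICATI__dbpath variable to run-script-before scripts; the /tmp fallback and the stand-in file creation are demo-only so the script can be tried outside Duplicati.

```shell
#!/bin/sh
# Sketch of the same DB-rotation idea for a run-script-before on Linux/macOS.
# DUPLICATI__dbpath is assumed to be set by Duplicati; the fallback is demo-only.
DBPATH="${DUPLICATI__dbpath:-/tmp/dupdemo/Backup.sqlite}"
DB="${DBPATH%.sqlite}"                    # strip the .sqlite suffix
mkdir -p "$(dirname "$DBPATH")"           # demo only: ensure the folder exists
: > "$DBPATH"                             # demo only: stand-in database file
for i in 4 3 2 1; do                      # slide numbered older copies up by one
  [ -f "$DB.$i.sqlite" ] && mv -f "$DB.$i.sqlite" "$DB.$((i+1)).sqlite"
done
[ -f "$DBPATH" ] && cp -f "$DBPATH" "$DB.1.sqlite"
# a real run-script-before should finish with: exit 0
```

Same idea as the batch version: five generations of the database are kept, oldest falling off the end.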

Creating a bug report only works on the current DB, but old ones can be slid into place carefully (with no backups running!).
This goes well with a chunk of a profiling log that was left running (can grow huge) as described above.
There are less huge log levels that may also help, but the DB bug report is not a log, just a single point.

I’m not reading this very well. If this is just adding and deleting source files (totally normal), and if the error happens during backup regardless of when the changes got done, with no hard stops/reboots, that’s odd. There is a chance that an automatic compact ran and messed up, unless you have no-auto-compact set; however, fatal errors frequently don’t create the usual backup results log for you, so you need a debug log.

Backup job failed after days but won’t restart was a debug effort that didn’t get a chance to be run, but the only clue was a DB bug report after the fail. Timing was concerning. Did yours fail at backup start or end?

Yeah, 1-3-2021: Took four days to upload my data. Then: Detected non-empty blocksets with no associated blocks!

Not good.

Donated 10 Euro. Hope you can fix this.

Is this the initial backup? If that can happen, it’s the most interesting and potentially debuggable, provided the Creating a bug report button is pushed and the result made available. Otherwise there’s nothing to examine.

Making a manual backup copy of the database would be good, as the bug report lacks the original names. An interesting find in the sanitized version might require you to look up file names, with directions provided on how to do that.

Thank you.

Backup started failing “non-empty blocksets with no associated blocks” was a recent beg for information.

Is that at the end of the backup or at start of the next? If at end, it’s the “nice” case, and info is most useful.
If it’s at the start, then you would have some prior backup logs, so please post the logs leading up to issue.

I had this error with the 2.0.5.1 version of Duplicati. I tried rebuilding the database multiple times on different occasions (partly because the rebuilds took so long), but that did not solve the problem.

I upgraded to v2.0.5.106-2.0.5.106_canary_2020-05-11 since I saw “Added a database index that significantly improves database rebuilds, thanks warwickmm” in the changes. The delete and recreate that took many hours before finished in less than 5 minutes. Not only that, but the backup also succeeded after the rebuild.

Just thought I would post this in case it helps someone else.

Did you go back to 2.0.5.1 after the repair was done? I wonder if it is a good thing to stay on the canary release (I’ve read the warnings).

(I’m in the same boat at the moment, and the previous repair took about a day, causing a lot of traffic and high CPU; I don’t want that anymore.)

I did not. Please note that it is where the db changes format from v10 to v11. From the changelog:
"NOTE: this version updates the database version from v10 to v11!

Downgrade from this version requires manually adjusting the version number in the database,
The additions can be re-applied if the database is upgraded again later."
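For the record, the manual adjustment the changelog mentions presumably means editing the schema version marker with the sqlite3 tool. The sketch below is a guess at the layout: the table and column names (a Version table with a Version column) are assumptions I have not verified against Duplicati's schema, so check your own database first and always work on a copy. The demo fabricates a tiny stand-in DB so nothing real is touched.

```shell
# Hedged sketch of rolling the DB schema marker back from v11 to v10.
# Table/column names are assumptions, not verified against Duplicati's schema.
DB="/tmp/dupdemo-v11/Backup.sqlite"        # hypothetical path for the demo
mkdir -p "$(dirname "$DB")"
if command -v sqlite3 >/dev/null 2>&1; then
  # demo only: fabricate a minimal version table so the UPDATE has a target
  sqlite3 "$DB" "CREATE TABLE IF NOT EXISTS Version (ID INTEGER PRIMARY KEY, Version INTEGER);
                 DELETE FROM Version; INSERT INTO Version (Version) VALUES (11);"
  cp "$DB" "$DB.bak"                       # keep a copy before touching anything
  sqlite3 "$DB" "UPDATE Version SET Version = 10;"
fi
```

Even then, per the changelog the v11 additions can simply be re-applied if the database is upgraded again later, so staying put may be simpler than downgrading.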

In my opinion the current canary version is more stable than the 2.0.5.1 beta release. That being said, you should be cautious on the canary channel. You might not want to upgrade to newer canary versions, and instead wait for the next beta release to switch back to the beta channel. I would advise against trying to downgrade from the current canary to the older beta.

That is what I plan to do. Stick with the canary version I mentioned unless I have problems. Then switch back to beta at the next beta release.


Okay, so this latest canary build could not repair, so I did the delete + rebuild.
This took a night + a day + a night :crazy_face: The laptop was very busy and had to stay on the network cable. I’m glad that I set Temp to a ramdisk after the previous repair; I think all files from all backups were written into that Temp folder (as blocks).
It ended up with a red box and a lot of errors, but the missed backup(s) started immediately after the repair and finished fine.

This repair issue has every time been with the backup that uses Google Drive. Although Google regularly tries to annoy the Duplicati backup, I don’t think that is what causes the need for repair.

Thinking about what I did the day before the failing backup: it was about the same as what caused the previous repair. I unpacked / moved / deleted many files, perhaps 70-100 GB or so. Might this be a cause of the database meltdown?

Now to find a way to go back to a more standard version…