I encountered this error for the first time today. I upgraded from 2.0.4.23 to 2.0.5.1 last week, and I have had a successful backup run since then. So it can still occur post-2.0.5.1. Sorry!
Is this written correctly? If there’s a missing “not”, are you saying the old versions were fine, but 2.0.5.1 is not?
Note that just upgrading on an error will leave you with the error; 2.0.5.1 is meant to prevent new ones.
It was written correctly: I upgraded to 2.0.5.1 and continued to have successful backups, and then a week later I got this error.
However, some further updates:
I tried Repair. The log said “Destination and database are synchronized, not making any changes”, so presumably it did nothing.
I tried starting the backup again… and it worked fine.
So either the repair worked silently, or the problem fixed itself anyway.
Thanks for clarifying that you had a successful backup after the update to 2.0.5.1, and then the problem arose.
This error is extremely generic, so it’s not surprising that a fix to one way to get it leaves some others.
One that’s fixed (or at least improved) in Canary involves hard stop, e.g. “Stop now” or killing process.
Stop now results in “Detected non-empty blocksets with no associated blocks!” #4037
Fix exceptions caused by Stop Now #4042
Mentioned above; it came too late for 2.0.5.1, and its scope was limited: “One source of this error has been fixed”.
Given your update though, I don’t know what this is, but I’m glad that it’s gone away. The error is from consistency checking of the local database, and any error that creeps in can sometimes be persistent.
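For context, the message comes from exactly this kind of consistency check: a blockset (one file’s content) that claims a non-zero length but has no block rows attached to it. Here is a minimal sketch of that sort of query on a toy schema; the table and column names are illustrative, not necessarily Duplicati’s exact schema:

```python
import sqlite3

# Toy schema loosely modeled on a backup database:
# a Blockset is one file's content, BlocksetEntry maps it to blocks.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Blockset (ID INTEGER PRIMARY KEY, Length INTEGER);
    CREATE TABLE BlocksetEntry (BlocksetID INTEGER, BlockID INTEGER);
    INSERT INTO Blockset VALUES (1, 1024), (2, 2048), (3, 0);
    INSERT INTO BlocksetEntry VALUES (1, 10), (1, 11);
""")

# A non-empty blockset (Length > 0) with no block rows is inconsistent.
orphans = con.execute("""
    SELECT ID FROM Blockset
    WHERE Length > 0
      AND ID NOT IN (SELECT BlocksetID FROM BlocksetEntry)
""").fetchall()
print(orphans)  # blockset 2 claims 2048 bytes but has no blocks
```

Once a row like blockset 2 exists in the local database, every later run trips over it, which is why the error tends to be persistent until a repair or recreate clears it.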
If it helps: in my quest to find a solution to “Detected non-empty blocksets with no associated blocks”, I ran a find to locate the backup sets, deleted the first backup set ever made, and then started the backup again. As for why I got the message, and why deleting the first backup set made all the difference, I suppose I won’t know until my backup completes and I can do random restores or a full restore. I also deselected folders in the backup job while trying to figure out which folder was the culprit, so the process of deselecting and reselecting folders could also have done it. Still in the dark and learning along the way.
I get this error constantly. My remote files are now 1 TB, and recreating the database takes almost a full day. So whenever I get this error I cannot upload for a long time, and my PC sits idle rebuilding a database that gets borked anyway.
(What I can say is that it came shortly before or after I added/deleted files that I didn’t want in the backup, whether the backup was running at the time or not.)
It doesn’t look like you reported details that might help figure out what’s going on. Care to do so, either here or in a new topic?
Things are possibly a little better in 2.0.5.1 (what are you on?) compared to what existed at the time of this topic. There are still issues (I think) with things like hard stops. Canary has some fixes, but is still not perfect…
Someone who gets this constantly is ideal help for trying to isolate the problem, which is what’s needed.
I didn’t report it so far, but I had to recreate the database twice.
Next time this happens I’ll create a debug log from the webview before and after.
If you mean before and after the Recreate, that’s not nearly as helpful as before and after DB error, but unpredictable errors are unpredictable… What I do is keep a series of databases via run-script-before:
rem Get --dbpath pathname value without the .sqlite suffix
set DB=%DUPLICATI__dbpath:~0,-7%
rem and use this to maintain history of numbered older DBs
IF EXIST %DB%.4.sqlite MOVE /Y %DB%.4.sqlite %DB%.5.sqlite
IF EXIST %DB%.3.sqlite MOVE /Y %DB%.3.sqlite %DB%.4.sqlite
IF EXIST %DB%.2.sqlite MOVE /Y %DB%.2.sqlite %DB%.3.sqlite
IF EXIST %DB%.1.sqlite MOVE /Y %DB%.1.sqlite %DB%.2.sqlite
IF EXIST %DB%.sqlite COPY /Y %DB%.sqlite %DB%.1.sqlite
EXIT 0
Creating a bug report only works on the current DB, but old ones can be slid in carefully (no backups!).
This goes well with a chunk of a profiling log that was left running (can grow huge) as described above.
There are less-huge log levels that may also help, but note that the DB bug report is not a log, just a single point in time.
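For anyone not on Windows, here is a rough Python equivalent of the rotation idea in the batch script above. It assumes, as the batch script does, that Duplicati passes the job database path to run-script-before via the DUPLICATI__dbpath environment variable; the helper name is mine:

```python
import os
import shutil

def rotate_db(dbpath, keep=5):
    """Keep `keep` numbered older copies of the job database next to it."""
    base, _ext = os.path.splitext(dbpath)         # strip the .sqlite suffix
    for n in range(keep - 1, 0, -1):              # .4 -> .5, .3 -> .4, ...
        src, dst = f"{base}.{n}.sqlite", f"{base}.{n + 1}.sqlite"
        if os.path.exists(src):
            shutil.move(src, dst)
    if os.path.exists(dbpath):
        shutil.copy2(dbpath, f"{base}.1.sqlite")  # current DB becomes .1

if __name__ == "__main__":
    dbpath = os.environ.get("DUPLICATI__dbpath")  # set by Duplicati
    if dbpath:
        rotate_db(dbpath)
```

Pointing run-script-before at a small wrapper that runs this keeps the last five databases around, so a pre-corruption copy can be slid in later for a bug report (carefully, with no backups running).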
I’m not reading this very well. If this is just adding and deleting source files (totally normal), and if the error happens during backup regardless of when the changes were made, with no hard stops or reboots, that’s odd. There is a chance that an automatic compact ran and messed things up, unless you have no-auto-compact set. However, fatal errors frequently don’t create the usual backup results log for you, so you need a debug log.
Backup job failed after days but won’t restart was a debug effort that never got a chance to run; the only clue was a DB bug report after the failure. The timing was concerning. Did yours fail at backup start or end?
Yeah, 1-3-2021: Took four days to upload my data. Then: Detected non-empty blocksets with no associated blocks!
Donated 10 Euro. Hope you can fix this.
Is this the initial backup? If that can happen, it’s the most interesting and potentially debuggable case, provided the Creating a bug report button is pushed and the result is made available. Otherwise there’s nothing to examine.
Making a manual backup copy of the database would be good, as the bug report lacks the original names. If something interesting turns up in the sanitized version, you might need to recover the file names, with directions on how to do that.
Backup started failing “non-empty blocksets with no associated blocks” was a recent beg for information.
Is that at the end of the backup or at start of the next? If at end, it’s the “nice” case, and info is most useful.
If it’s at the start, then you would have some prior backup logs, so please post the logs leading up to issue.
I had this error with the 2.0.5.1 version of Duplicati. I tried rebuilding the database multiple times on different occasions (partly because the rebuilds took so long), but that did not solve the problem.
I upgraded to v2.0.5.106-2.0.5.106_canary_2020-05-11 since I saw “Added a database index that significantly improves database rebuilds, thanks warwickmm” in the changes. The delete and recreate that previously took many hours finished in less than 5 minutes. Not only that, but the backup also succeeded after the rebuild.
Just thought I would post this in case it helps someone else.
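That kind of speedup is what you would expect when a column used in the rebuild’s lookups gains an index: each lookup stops being a full table scan. A toy illustration using SQLite’s query planner (the table mimics the idea; the index name and column choice are my guesses, not the actual change):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE BlocksetEntry (BlocksetID INTEGER, BlockID INTEGER)")
con.executemany("INSERT INTO BlocksetEntry VALUES (?, ?)",
                ((i % 100, i) for i in range(1000)))

def plan():
    # Ask SQLite how it would execute a per-blockset lookup.
    rows = con.execute(
        "EXPLAIN QUERY PLAN "
        "SELECT COUNT(*) FROM BlocksetEntry WHERE BlocksetID = ?", (50,))
    return " ".join(r[-1] for r in rows.fetchall())

before = plan()  # full table scan: every lookup reads the whole table
con.execute("CREATE INDEX idx_blocksetid ON BlocksetEntry (BlocksetID)")
after = plan()   # the same lookup now goes through the index
print(before)
print(after)
```

A rebuild performs this kind of lookup once per file, so going from a scan to an index search multiplies across the whole backup, turning hours into minutes.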
Did you go back to 2.0.5.1 after the repair was done? I wonder if it is a good idea to stay on the canary release (I’ve read the warnings).
(I’m in the same boat at the moment, and the previous repair took about a day, causing a lot of traffic and high CPU; I don’t want that anymore.)
I did not. Please note that this is the version where the database format changes from v10 to v11. From the changelog:
"NOTE: this version updates the database version from v10 to v11!
Downgrade from this version requires manually adjusting the version number in the database,
The additions can be re-applied if the database is upgraded again later."
In my opinion the current canary version is more stable than the 2.0.5.1 beta release. That being said, you should be cautious on the canary channel. You might not want to upgrade to newer canary versions, and instead wait for the next beta release to switch back to the beta channel. I would advise against trying to downgrade from the current canary to the older beta.
That is what I plan to do: stick with the canary version I mentioned unless I have problems, then switch back to beta at the next beta release.
Okay, so this latest canary build could not repair, so I did the delete + rebuild.
This took a night, a day, and another night. The laptop was very busy and had to stay on the network cable. I’m glad that I set Temp to a RAM disk after the previous repair; I think all files from all backups were written into that Temp folder (as blocks).
It ended up with a red box and a lot of errors, but the missed backup(s) started immediately after the repair and finished fine.
This repair issue happened every time with the backup that uses Google Drive. Although Google regularly tries to annoy the Duplicati backup, I don’t think that’s what causes the need for repair.
Thinking about what I did the day before the failing backup: it was about the same as what preceded the previous repair. I unpacked, moved, and deleted many files, perhaps 70-100 GB or so. Might that be a cause of the database meltdown?
Now to find a way to go back to a more standard version…
I got this error after a comms problem while backing up to OneDrive for Business. It seems that Duplicati does not handle comms issues well and corrupts its own database. I’m trying a rebuild of the database to get around the problem, but last time I had to start the backups again.
If you have specifics (ideally a log file, a database bug report, or reproducible steps), that would help find the issue.
I’ve been throwing artificial comms failures at Duplicati for a while, but I can’t see this particular problem…
It’s hard to reproduce an internet failure in the middle of a backup. I’m happy to say that a delete and rebuild of the database fixed the error for me this time. Last time I had to recreate the backup.