Backup started failing "non-empty blocksets with no associated blocks"

I also have this problem. In my case the first backup always works, but every subsequent backup fails. If I re-generate the local database, the next backup will work and the ones after that will fail.

Is there a way to automate the re-generation of the database? Maybe a command that I could set as a Windows Scheduled Task?

Also, are you still interested in a “database bug report”? If so, how can I send it to you?

Thanks for your help.

Delete plus The REPAIR command would do it, but that seems a pretty extreme measure, and Recreate can take a while.

Tries to repair the backup. If no local database is found or the database is empty, the database is re-created with data from the storage.
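
If you really wanted to automate it as a Windows Scheduled Task, a rough sketch (assuming a default per-user install; every path, the URL, and the passphrase below are placeholders you would have to fill in) could be a batch file like:

:: Sketch only: delete the job's local database, then let repair rebuild it from the destination.
:: The job's actual database path is shown on the job's Database screen.
del "C:\Users\<you>\AppData\Local\Duplicati\<JOBDB>.sqlite"
"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" repair "<storage-url>" --dbpath="C:\Users\<you>\AppData\Local\Duplicati\<JOBDB>.sqlite" --passphrase=<passphrase>

Recreating the database on every run is slow and only hides whatever is corrupting it, though, so debugging is the better route.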

If this is happening more than very rarely, this might be a good opportunity to debug it. But first you can try the possible fix, if you’re willing to install a canary release that hasn’t had much test time. You can set your Settings back to Beta by hand afterwards, to make sure a really buggy canary doesn’t show up and mess things up… Reverting from this canary to a current Beta won’t be possible due to DB changes, but the next Beta will work.

v2.0.4.24-2.0.4.24_canary_2019-09-02

Fixed sporadic issue with backups of files being written, thanks @BlueBlock

but there was a series of build and upgrade issues (unrelated to that fix) which stabilized (for now) at this:

v2.0.4.28-2.0.4.28_canary_2019-09-05

CheckingErrorsForIssue1400 and FoundIssue1400Error test case, analysis, and proposal #3868
has some technical details on how backing up actively written files can lead to the named bug, and also to yours. You could try setting up a --log-file, or use the live log at About → Show log → Live → Warning, to see if you can spot an Issue1400 message. Those messages are quite informative and may save the need for a DB bug report, but if you want to make one anyway, the best way to share it on the forum is to post a link to the (possibly huge) file.
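
For the log file, a sketch of the extra options on the job (the path is a placeholder; Warning level keeps the file reasonably small) would be:

--log-file=C:\temp\duplicati-backup.log
--log-file-log-level=Warning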

I installed the v2.0.4.28-2.0.4.28_canary_2019-09-05 version and launched a couple of backups, and I’m not able to reproduce the bug. Before that, when I was using the latest Beta version, I had the problem immediately on the second backup after reconstructing the database.

I now get some warnings that I did not notice before, but I guess that is a new feature. These warnings seem to be about locked files. You can see them in the log here.

My guess is that the issue might be fixed in this release. Are there any more tests or information you would like me to provide?

How safe is it for me to now use this Canary version until the next Beta? Should I revert back and recreate my backups?

Thanks for your support.

The log one-line summaries unfortunately don’t talk about causes, but if you saw locked-file warnings in a longer message, those have existed for a long time AFAIK. Using --snapshot-policy is a good way to stop them, but it requires either running as a Windows service or getting administrative privileges another way.
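
As a sketch, assuming the backup runs elevated or as a service, that’s just an advanced option on the job:

--snapshot-policy=required

With required the backup fails if a VSS snapshot can’t be made; auto falls back to reading the files directly if a snapshot can’t be created.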

Sometimes it’s hard to know for sure whether the issue is intermittent; however, one thought to consider is that one way this has been seen before is when Duplicati tries to back up its own database area, which is in the user profile area. One especially troublesome file is the journal file for the backup’s database, because it’s constantly changing, and a file that grows after its previous end-of-file has already been seen can cause this error.

Database management Local database path being inside your backup source would be a sign that you hit that bug. Previously the workaround was to deselect the DB folder or to use an exclude filter. The actual fix could replace that.
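
For the filter workaround, a sketch (assuming the default per-user database location, which can differ, e.g. for a service install) would be an exclude such as:

--exclude=C:\Users\<you>\AppData\Local\Duplicati\

or the equivalent “Exclude folder” entry under the job’s Filters.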

It’s hard to say after only 10 days, but it seems good so far. At some point the bug fixes outweigh any new issues. For me, that was at 2.0.4.22_canary_2019-06-30 for sure. Before that my backup broke too much.

There’s v2.0.4.21-2.0.4.21_experimental_2019-06-28 too, which has seemingly not been causing trouble; however, there is unfortunately no way to retrieve collected information on what versions people are running.

Before any release (even canary) goes out, there’s a suite of unit tests run, but they don’t catch everything. Generally it’s worth letting people with test systems without valuable data pick up canary first, and I’m sure some have (and I thank those people). How much longer to wait is your call, but those on 2.0.4.5 beta are on a release stemming from v2.0.3.14-2.0.3.14_canary_2018-11-08 so are lacking 10 months of bug fixes.

If 2.0.4.18 does turn up some new bug for you, you can downgrade to a certain extent, limited by database upgrades that the old version can’t deal with. It looks like you can downgrade Duplicati as far as the broken v2.0.4.13-2.0.4.13_canary_2019-01-29, but you wouldn’t want to; 2.0.4.22 would be quite a bit less buggy.

Alternatively, you can do the deselect or exclude workaround and return to 2.0.4.5 if that’s what you’d prefer. If you upgraded from there, you have a “backup” database in the old format in your DB area, which you can copy into the standard DB spot for your backup, along with downgrading to the old Duplicati. Depending on whether the upgrade was done within Duplicati or by .msi file, you would either remove the new upgrade from the special spot that receives upgrades, or use the .msi to reinstall the version you want into Program Files\Duplicati 2.
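
As a rough sketch of that database swap (every file name here is a hypothetical placeholder; your DB area is typically C:\Users\<you>\AppData\Local\Duplicati, and the real names of the current database and the old-format copy will differ), with Duplicati not running:

:: set the newer-format database aside, then put the old-format copy in its place
ren "C:\Users\<you>\AppData\Local\Duplicati\<JOBDB>.sqlite" "<JOBDB>.sqlite.new-format"
copy "C:\Users\<you>\AppData\Local\Duplicati\<old-format-backup-of-JOBDB>.sqlite" "C:\Users\<you>\AppData\Local\Duplicati\<JOBDB>.sqlite"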

Downgrading / reverting to a lower version covers this in more detail. As I said, my backups want at least 2.0.4.22, but for you the fix was in 2.0.4.28, so the “how safe” question is not nearly so well answered yet…

There might also be a next Experimental, if the usual progression to Beta is followed, so if somehow the label Canary is uncomfortable, the label Experimental might feel better because it’s a more official approval of a particular Canary (often it’s just a rebuild). There seems to be a wish to push out another Canary, though…

So I finally got back around to looking at this thread, and I used DB Browser as instructed, and I found only one ID that matched the command. How do I convert that into a purge-able file/directory? I must have missed that instruction somewhere. Once I get it, I go into Duplicati and use the purge function to remove it from the DB, correct?
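
(For reference, I’m assuming the purge step itself would look roughly like this, either from the job’s Commandline screen, where the URL and options are pre-filled, or from the CLI, with the path being whatever file that ID maps to:

purge "<storage-url>" "C:\path\to\offending\file" --dry-run

and then run again without --dry-run once the output looks right.)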

Hi, just wanted to confirm that while using version 2.0.4.30 + snapshot-policy=required on Windows, with OneDrive as a back-end, I had multiple consecutive backups without errors or even warnings. For me, the issue is solved.

I’ll continue to use this canary version until the next Beta.

Thanks for your support.

(I made a small donation via Paypal)

Because you were asking about safety, you might want to avoid the “purge” command, or keep an eye on:

Purge operations result in broken dlist files #3924, which was just filed. I can’t think of any other regressions affecting you. There’s an “FTP (Alternative)” regression that some special cases might see, but you’re not one of them.

Old thread… but I had the same issue now in one backup. I just deleted version=0-2 (so the last 3 days); no problem, because this was a weekend and on Friday there was no important change.

After that I tried 2 manual runs and both finished with success.

Duplicati version is 2.0.5.1_beta_2020-01-18

Does it happen much? I’m still trying to find someone who gets it a lot who can do some debugging like in
Fatal error: Detected non-empty blocksets with no associated blocks (and below that, and also earlier on).
Basically, a series of databases (fairly easy to set up) plus a huge profiling log (got disk space?) may help.
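
“A series of databases” just means keeping a copy of the job’s local database after each backup. A sketch (placeholder paths; the real database path is shown on the job’s Database screen) could be as simple as a post-backup copy like:

copy "C:\Users\<you>\AppData\Local\Duplicati\<JOBDB>.sqlite" "C:\DuplicatiDebug\<JOBDB>-after-run-001.sqlite"

with the run number or a timestamp bumped each time, alongside a profiling-level log file.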

Thanks for confirming it can do this, though I don’t know if it’s better or worse. There were some fixes for CheckingErrorsForIssue1400 and FoundIssue1400Error test case, analysis, and proposal #3868 mentioned above. There might still be a gap left, but I haven’t been able to hit it when testing for it.

If you’re on Windows, the gap can be removed by using snapshot-policy for VSS, per the comment earlier. Forum response to that indicates it worked for at least one person. Maybe the theoretical hole should be closed whether or not any lab testing (such as mine) can get it. Or at least the current code should be examined more.

There would be a lot less guessing if there were one or more DB bug reports to review, and some logs too.

I have 39 backup jobs that have been running for years. Some run once a week, some every 2-3 hours. Some go to cloud spaces, some to local network spaces.

I had this failure for the first time!

Sorry. If I get the error again I will ask what I can do before trying to fix it.

That’s good to hear because it means you’re not too impacted. It’s bad because rare bugs are hard to find.

You can’t do the ideal of collecting DB and log-file history then. That would need setup in advance, awaiting something that for you is very rare. Maybe someone who gets it often can help. Still, asking will allow some limited examination of the situation going forward. Thank you for the offer. I hope others can also help here.

I just got this error again on 2.0.5.1_beta_2020-01-18, when trying to do an initial backup. What logs and stuff should I provide?

Currently it happens every time I try to continue the initial backup.

You could post a DB bug report of the failed initial backup. That might offer clues, but really a log is required. Ordinary logs are typically not detailed, but you could post whatever you can find, maybe the Complete log from the job log if it got one, though often a fatal error doesn’t get one and instead goes into Home → About → Show log.

An Export as Command-line with suitable privacy redactions could also be posted, to give an idea of the configuration.

Unless you’re sure you can get this to fail again, it would be best to preserve all the backup data as it currently sits, so maybe do a job export and use a new destination for the run, to hopefully get a nice log.

log-file=<path> with log-file-log-level=profiling plus setting profile-all-database-queries is best, but it’s enormous and contains some personal information such as pathnames, and maybe email info if that’s in use.
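
As a sketch, those options would look something like this (the log path is a placeholder; expect the file to reach many gigabytes on a large backup):

--log-file=C:\temp\duplicati-profiling.log
--log-file-log-level=Profiling
--profile-all-database-queries=true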

The ideal situation would be if this were reproducible on an initial backup with a small amount of data, so if it continues to happen as you trim the backup down, that would be wonderful, but I’m not sure how possible that is.

OK, I’ll try when I find the time, but I can’t promise anything. Currently, I’m facing a different issue with a different backup where I’m running in circles.

So I just had this error. I deleted version 0 and then deleted version 0 again (did it twice), and that solved my problem. The process took me only a few minutes (seconds?). I have used this same process to solve a number of issues. This is what I did (a rough command-line equivalent is sketched after the steps)…

From the Backup Set…
Choose “CommandLine” option.
Command: delete
Delete the contents of the “Commandline Arguments” box.
Replace with: --version=0
(Latest backup is 0)
(Alternatively you can run the list-broken-files command and get the version numbers of the broken filesets, but I did not bother doing that for this issue.)

Add to advanced options with…
--no-auto-compact
--no-backend-verification
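
For reference, a rough command-line equivalent of the above (storage URL and passphrase are placeholders) would be something like:

"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" delete "<storage-url>" --version=0 --no-auto-compact --no-backend-verification --passphrase=<passphrase>

possibly with --dbpath=<path to the job's local database> added so it works against the same database as the GUI, and with a range like --version=0-2 if you want to drop several versions in one go (as mentioned earlier in the thread).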