Consistent Unexpected difference in fileset version

I would love to help you test this out - it’d be great if this problem were solved before the next beta!

If you have figured out a way to intentionally trigger the problem so we can test with and without the fix, that would be extremely helpful. (I need to read through your posts on the topic still…)

Agreed. This has been my own top backup-breaker. It took some time to track it down to the code though.

What you describe is exactly what #3800 has – steps to trigger (which I hope you can reproduce too), derived from an actual broken backup, then a proposed fix, then testing of the fix using the same often-run backup that breaks without it.

Wonderful to have your help. I think we’re both tired of guiding people through version deletes (better than nothing, but sometimes ineffective). Note the caveat on my fix though: it’s not in the code lines seen here. Possibly there are two problems, or maybe the theory in this post is wrong. The other is proven by testing.

When I made a setup file, it did not work after I installed it on Windows. The tray icon did not appear, and there was no output when I tried to open the installed program.

I have spent much time on this but have not gotten any feedback. If anyone could assist with this, it would be great.



If you’re explaining the unrelated issue (which admittedly I mentioned, but had only a workaround for) I’d hope the discussion happens, but on the “Making a setup file from code” topic. I’d like some nice directions myself.

Do we have a rough ETA on when the proposed fix might be merged and included in a future release? I was hoping it’d make it into the latest build or two. I got hit by this bug a couple of weeks ago and have been holding off ditching the current multi-TB backup and starting over.

BTW, great work @ts678 and team. This one looked pretty tricky to nail down.

I see no recent note or progress on the item below. Disappointed, because the pull request was 3 days before canary.

Fix database issue where block was not recognized as being used #3801

There was a hot item to “ship”. The documented change in it is the Amazon Cloud Drive loss warning. Yesterday’s experimental is basically that canary, so maybe we see Duplicati breaking for 9 more months?

This change could be considered a fix to core logic, and possibly people are cautious about putting it in… Things are definitely broken at the moment, but it’s important to be very sure the cure causes no troubles. Ordinarily that’s what canary is for, however canary during lead-up to a beta is probably treated cautiously.

If the fix is not allowed into the beta, my plan is to patch my installation so I can get some needed stability. This can be field-patched by anyone who dares do the same, as it’s just a string edit with a debugger tool. Because many people are not willing, I’m not sure what the future will hold for this issue. I guess we’ll see.

You actually might be a good candidate to run the fix through its paces, as a test. My backups are smaller. Experimental itself could probably benefit from a workout of that size, as I’m not sure if canary sees those.

I saw the PR and decided not to roll it into the experimental. It is quite possible that it will fix many issues (and yes there are some annoying ones) but I do not want to annoy people on the “mostly-stable” channel with something untested that might have unexpected quirks.

My plan currently is to release a beta on 2019-07-15 based on the current experimental.

There are no rules stating that we should wait that long between beta releases (we really should do more frequent releases).

I have put up a new canary with the fix included. If we do not hear any bad news from that, I will roll a new experimental with the fixes. I think the fixes are important enough that we should consider delaying the next beta to get them included. But we cannot delay for too long, as AmzCD is shutting down on a fixed date and I want the warning out before the shutdown happens.

Thanks for the quick turnaround @kenkendk! I agree the message about AmzCD shutting down is pretty important.

I grabbed the latest canary but it doesn’t seem to fix an already corrupted DB. I’ve tried deleting a couple more versions that report the missing entry but it just reports as missing in the next version.

Was I overly optimistic and the fix will only prevent future corruption and not salvage an existing DB?

Let me know if there is anything I could/should do or try if that is helpful.

Correct. I think the only sometimes-workaround has been version deletions, but the “prevent” idea isn’t completely proven either (need field experience over the long term). One cause is hopefully addressed.

If you want to get into manual database editing, coming up with a method might be a bit easier now that there’s a better understanding of one cause. It will still probably take some doing, and maybe a DB bug report with further manual sanitization due to some privacy flaws which have been added or discovered.

Thanks for confirming!

For what it’s worth, I deleted a handful more versions, trying the version reported, the one after, and the one before, with no real luck. I think I started with 40 versions and made it down to about 25, deleting them one at a time.

Finally it was telling me the missing entry was in version 20 of the 25 so I ran a delete with --keep-version=18 and that seems to have done the trick. Not sure that is really helpful to anyone else but it did finally work once I deleted enough versions.
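For reference, the selection a keep-newest-N delete performs can be sketched in isolation. This is a hypothetical helper showing only the retention logic (which versions survive and which get deleted), not Duplicati code:

```python
from datetime import datetime, timedelta

def versions_to_delete(version_times, keep):
    """Given backup version timestamps, return the ones a
    keep-newest-N retention policy would delete: everything
    except the `keep` most recent versions."""
    ordered = sorted(version_times, reverse=True)  # newest first
    return ordered[keep:]                          # drop all but the first `keep`

# 25 daily versions, as in the situation above; keeping 18 deletes the 7 oldest
start = datetime(2019, 6, 1)
times = [start + timedelta(days=i) for i in range(25)]
doomed = versions_to_delete(times, keep=18)
print(len(doomed))  # 7
```

The point being that the delete counts from the newest version, so trimming to 18 removed the older versions containing the problem entry in one pass instead of one-at-a-time deletes.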


In my understanding, the block is removed from the database even though it is still in a volume. Fixing this would require re-reading at least the index file once the offending volume is known.

Thanks, this tip saved more than 1 terabyte of network traffic today. I also updated to the latest version, in case it has better protection against this.


I am having real trouble with this same error. Due to the notifications not working, I only just realised it hasn’t been backing up since 8th April. I’ve tried deleting and repairing, but I keep getting the same outcome. Any help will be most appreciated.

Failed: Unexpected difference in fileset version 0: 08/04/2020 23:00:00 (database id: 187), found 1000145 entries, but expected 1000166
Details: Duplicati.Library.Interface.UserInformationException: Unexpected difference in fileset version 0: 08/04/2020 23:00:00 (database id: 187), found 1000145 entries, but expected 1000166
at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency(Int64 blocksize, Int64 hashsize, Boolean verifyfilelists, IDbTransaction transaction)
at Duplicati.Library.Main.Operation.TestHandler.Run(Int64 samples)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String& paths, IFilter& filter, Action`1 method)
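For anyone curious what that check is comparing, the shape of it can be illustrated with a toy sketch. The table and column names below are illustrative assumptions, not Duplicati’s real schema; the idea is simply that the number of file entries actually recorded for a fileset is compared against the count the fileset is expected to have, and a mismatch raises the error above:

```python
import sqlite3

# Toy schema loosely modeled on a local backup database (names are
# illustrative, not Duplicati's real schema).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Fileset (ID INTEGER PRIMARY KEY, Timestamp TEXT, ExpectedCount INTEGER);
CREATE TABLE FilesetEntry (FilesetID INTEGER, FileID INTEGER);
""")
con.execute("INSERT INTO Fileset VALUES (187, '2020-04-08 23:00:00', 1000166)")
# Simulate a fileset that has lost 21 entries: 1000145 present, 1000166 expected
con.executemany("INSERT INTO FilesetEntry VALUES (187, ?)",
                [(i,) for i in range(1000145)])

def verify_consistency(con):
    """Return (fileset_id, found, expected) for every fileset whose
    recorded entry count does not match its expected count."""
    rows = con.execute("""
        SELECT f.ID, COUNT(e.FileID), f.ExpectedCount
        FROM Fileset f LEFT JOIN FilesetEntry e ON e.FilesetID = f.ID
        GROUP BY f.ID
    """).fetchall()
    return [(fid, found, exp) for fid, found, exp in rows if found != exp]

for fid, found, expected in verify_consistency(con):
    print(f"Unexpected difference in fileset (database id: {fid}), "
          f"found {found} entries, but expected {expected}")
```

This is why a repair of the remote files alone doesn’t clear the error: the mismatch lives inside the local database’s own bookkeeping.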

Welcome to the forum @Jamie_Leeke

Are you running the newer version? It does hugely better.

I upgraded to that yesterday but still got the same error. It’s currently recreating the database, but with it being 3 TB it’s taking a while, so hopefully it will complete. If not, I think I’m just going to try recreating the task and see if that works.

The error is a DB consistency check before the backup runs, to avoid making a problem worse. The newer version has the same check, but if you can get a good DB, it’s much better at keeping it good…

The original server which I opened this ticket for hasn’t reported this issue since I upgraded, so it appears the fix that rolled out has worked in this instance (I’ve had to let it run for several weeks to be sure).

Sadly we had to abandon Duplicati on another server as the upgrade process left it in an unusable state. It lost DB integrity completely and we couldn’t manage or add to the existing backup set. We couldn’t recover from this position and had to go with another backup solution.

We need to be mindful that Duplicati is both free and in beta. That said, it’s been in beta for over two years now, and the rate of development (especially time to fix issues like this) will make the difference between this becoming a trusted product and one that isn’t going to make the grade. We simply can’t have consistency or integrity issues within a backup product.

I genuinely like where this is going, and wish the Duplicati team all the best moving things forward.

Thank you. It’s still performing the database recreate but doesn’t look to have moved much since yesterday, so I’ll give it another couple of days and see if it moves. I realize that with it being 3 TB it will take a while.

Roughly where is it on the progress bar? The 70% to 100% range is three passes, and the last, from 90% to 100%, finishes with a complete search of remote dblock files if there is still missing information.

Viewing the live log at About --> Show log --> Live --> Verbose will give you status info on how far it is…

Or possibly you’re not that far yet. The ideal for even a large backup is to download only the small files, meaning only dlist (list of files in the backup), and dindex (index to the relatively large dblock data files).

One known slow spot, even with ideal downloads, is that the DB work for a large backup can get slow from having to track too many blocks. If you’re running at defaults, the block size is 100 KB, so 3 TB means about 30 million blocks’ info must be put in the recreated DB. Using a larger block size in advance can help; however, this can’t be changed on an existing backup, so that’s just a hint for any future large backups.
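The arithmetic behind that block-count estimate, for anyone sizing a future backup (plain back-of-envelope numbers, nothing Duplicati-internal):

```python
# Default block size is 100 KB, i.e. 100 * 1024 bytes.
blocksize = 100 * 1024          # bytes per block at defaults
backup_size = 3 * 10**12        # 3 TB of source data, as in this thread

blocks = backup_size // blocksize
print(f"{blocks:,} blocks")     # → 29,296,875 blocks, roughly 30 million
```

Every one of those blocks becomes a row (or several) in the recreated database, which is why a bigger block size on a future multi-TB backup shrinks the bookkeeping dramatically.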

If the recreate finishes with errors, please try to make a note of the error. At this speed, we don’t want to rerun it.