Is there any way to fix an "unexpected difference in fileset" error other than deleting?

Regardless of how the recreate got started (it really shouldn’t be doing it automatically), recreate performance is a known issue.

Much of it seems to be related to inefficient SQL checking the database for already existing records. While I found some time to build some metrics for my specific test case, I haven’t gotten around to actually testing any fixes. :frowning:
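To illustrate the kind of slowdown described above, here is a minimal sketch (using a made-up table, not Duplicati’s actual schema) of how an existence check without a supporting index turns every lookup into a full table scan:

```python
import sqlite3
import time

# Hypothetical illustration (this is NOT Duplicati's real schema): checking
# each incoming record with a SELECT is cheap only when the looked-up column
# is indexed. Without an index, every check is a full table scan, so the cost
# per record grows with table size and progress appears to stall near the end.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Block (Hash TEXT, Size INTEGER)")
con.executemany("INSERT INTO Block VALUES (?, ?)",
                ((f"hash{i}", i) for i in range(20000)))

def exists(hash_):
    row = con.execute("SELECT 1 FROM Block WHERE Hash = ?",
                      (hash_,)).fetchone()
    return row is not None

t0 = time.perf_counter()
unindexed = sum(exists(f"hash{i}") for i in range(0, 20000, 200))
t_scan = time.perf_counter() - t0

con.execute("CREATE INDEX BlockHash ON Block (Hash)")
t0 = time.perf_counter()
indexed = sum(exists(f"hash{i}") for i in range(0, 20000, 200))
t_index = time.perf_counter() - t0

print(unindexed, indexed)  # both 100: same answers, very different cost
```

The same 100 lookups return identical results either way; only the per-lookup cost changes, which is consistent with a recreate that slows down as the database fills.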

What generally seems to happen is the progress bar moves along until about 75%, then seems to stop. Duplicati is actually still working, but it takes progressively longer to finish a percent of work the further along the process is.

In regards to recreate vs. starting over, it MIGHT be faster to start over, but you’ll likely use upload bandwidth re-pushing the files. You’ll also lose the history of previous backups.

One thing I’ve recommended to people who want to start over is to point to a new destination folder and leave the “broken” one in place. If needed, a direct restore from the “broken” destination can still be done.

That way you’re not left with NO backup while the new one is “filling”. And of course you can keep the “broken” destination around for as long as you care about the history it might contain.

This is just a backup to a local (external USB) drive, so bandwidth is not an issue. At the current rate, it looks like it is pretty stable at 90 days left to recreate the database. From your comment, it sounds like if anything it will slow down rather than speed up as it makes progress, so starting fresh would be much faster (I think the first backup took about two weeks). Since this database issue has happened several times to me, though, I’m starting to think Duplicati just isn’t workable for my current setup and I should just use rsync or rdiff-backup until Duplicati is more stable/more efficient.

I too have started getting this error on one of my backup sets. I’ve not changed anything in that configuration AFAIK.

2.0.4.5 (2.0.4.5_beta_2018-11-28)

There are 16 versions of this backup set.
It runs daily.
The most recent successful run was on March 2nd.
The fileset from version 4 (2/24) has the unexpected difference in the number of entries.

Given that this is a moderately large fileset (57GB), I’d love to know what you’d suggest doing to repair it.

 Failed: Unexpected difference in fileset version 4: 2/24/2019 4:03:11 AM (database id: 100), found 6180 entries, but expected 6181
    Details: Duplicati.Library.Interface.UserInformationException: Unexpected difference in fileset version 4: 2/24/2019 4:03:11 AM (database id: 100), found 6180 entries, but expected 6181
      at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency (System.Int64 blocksize, System.Int64 hashsize, System.Boolean verifyfilelists, System.Data.IDbTransaction transaction) [0x00370] in <c6c6871f516b48f59d88f9d731c3ea4d>:0

Posting a “me too” on this issue. Had it happen frequently. PITA to rebuild my backups since I’m backing up to a server @ my parent’s and they have a bandwidth cap. I generally have to grab the server and ext drive and bring them home to rebuild a backup :angry:

You don’t have to delete the entire backup and start over. All you need to do is delete the specific backup version.

It is believed that this bug has been fixed, but it hasn’t made its way into the Beta releases yet. If you are willing to use a Canary version, you’ll have access to the fix, 2.0.4.22 or newer (excluding the special 2.0.4.23 beta release).

How do you delete a specific version? It says the error is with version 0

I can provide a quick rundown, but you’ll find more detail in numerous other threads. Try using the search function of the forum to find the relevant posts.

But the quick rundown:

  • Go to the main Duplicati Web UI
  • Click the backup set that is having the issue
  • Click the “Commandline …” link
  • Pick “delete” in the Command dropdown
  • Scroll to the bottom and pick “version” from the Add Advanced Option dropdown
  • Type the version number that is causing you issues (in your case, “0”)
  • Click the “Run delete command now” button
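For reference, the same delete can be run from a terminal with Duplicati’s command-line client. The destination URL and database path below are placeholders; substitute the real values shown on your backup’s Commandline screen:

```shell
# Delete only backup version 0 (the destination URL and --dbpath value
# below are placeholders; copy the real ones from the Commandline screen).
duplicati-cli delete "file:///mnt/backup/duplicati" \
    --version=0 \
    --dbpath=/home/me/.config/Duplicati/XXXXXXXXXX.sqlite
```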

Good luck!


I’m facing the same issue on my linux laptop.
I just upgraded to 2.0.4.28.

Backups are often interrupted by sleep/suspend. I don’t mind that the current backup needs to restart, but I’m troubled by the broken state it leaves behind.

Worked!!! Thanks! Will keep this in mind for future issues.

Good to hear! Hopefully this problem will be a thing of the past once the fix gets in the Beta releases…

Although I wonder if this deserves its own topic if it gets into serious analysis – when was the issue seen relative to that upgrade (probably expressed in number of seemingly successful backups before failure)?

If you think sleep/suspend is related, can you say anything more about where the backup was, and what happened on wake? Do you have any logs or other materials that might be useful to looking at the issue?

There should always be About → Show log → General and Remote, and Remote would give time when action happened – which might be a clue about when sleep was because there will be a jump in the time.
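As a rough sketch of that “jump in the time” check (assuming log lines that begin with an ISO-8601 timestamp; the real remote log format may differ), a hypothetical helper could flag suspiciously long gaps:

```python
from datetime import datetime

# Hypothetical helper: find jumps in log timestamps that could mark a
# sleep/suspend gap. Assumes each line starts with an ISO-8601 timestamp;
# the actual Duplicati remote log format may differ.
def find_gaps(lines, threshold_minutes=30):
    gaps = []
    prev = None
    for line in lines:
        ts = datetime.fromisoformat(line.split()[0])
        if prev is not None and (ts - prev).total_seconds() > threshold_minutes * 60:
            gaps.append((prev, ts))
        prev = ts
    return gaps

log = [
    "2019-03-02T04:00:10 put duplicati-b123.dblock.zip",
    "2019-03-02T04:00:45 put duplicati-i456.dindex.zip",
    "2019-03-02T09:12:00 list",  # ~5 h jump: machine was likely asleep
]
print(find_gaps(log))  # one gap, between 04:00:45 and 09:12:00
```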

What sort of destination are you using? I’ve had no luck causing such a problem to several destinations… What would be great would be steps to reproduce the problem on demand in a fairly generic environment. Even if you can’t get it every time, if it happens often then maybe you could set up some more debug logs.

If you haven’t yet cleared the database problem, you might also consider posting a link to a DB bug report.

Oops, I lost the failing set by correcting it.
Among the many machines I manage, 3 have a rather large Duplicati setup with 3 to 8 different sets and close to 1 TB each. Of these 3, my laptop is the one complaining most often about unexpected differences, but it is obviously also the one with the most interrupted backups (the other two are file servers), as it enters sleep. I usually simply repair and restart.
I’ll try to report with more details next time.

Checked in on my Duplicati and noticed this blocking error:

Unexpected difference in fileset version 1: 5/6/2020 11:17:55 AM (database id: 131), found 136802 entries, but expected 138074

Not sure if it’s related, but it started when I turned on suspend on this machine (KDE Neon) for the first time :man_shrugging: I get the same error when I try to manually force a run.