Is there any way to fix an "unexpected difference in fileset" error other than deleting?

Are you able to reproduce the problem with a local drive destination (not an external drive)?

Not easily, because there is not enough local space to make a copy of the data being backed up. Is the error generated based on the combination of what’s in the sqlite file as well as the actual backup chunks on the external drive?

Yes - the database thinks there should be X files but the destination has Y, so Duplicati errs on the side of caution and lets you know there’s something unexpected going on.
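As a rough illustration of that idea only (this is not Duplicati’s actual code, and the Remotevolume/Name table and column names plus both paths are assumptions/placeholders that may not match your version), the check boils down to comparing what the job database has recorded against what is actually sitting in the destination folder:

    # Illustrative sketch, not Duplicati's code: compare the file names the
    # local job database has recorded against the files at the destination.
    # "Remotevolume"/"Name" and both paths are assumptions/placeholders.
    import sqlite3
    from pathlib import Path

    def compare_db_to_destination(dbpath, destination):
        with sqlite3.connect(dbpath) as db:
            recorded = {row[0] for row in db.execute('SELECT "Name" FROM "Remotevolume"')}
        on_disk = {p.name for p in Path(destination).iterdir() if p.is_file()}
        missing = recorded - on_disk  # database expects these, destination lacks them
        extra = on_disk - recorded    # destination has these, database never recorded them
        return missing, extra

    missing, extra = compare_db_to_destination("job.sqlite", "/mnt/external/backup")
    print(f"{len(missing)} missing and {len(extra)} extra remote files")

The real check is more careful than this (for example, it tracks the state of each volume), but the mismatch it reports is essentially that set difference.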

One possible wrinkle is all my external drives are formatted EXFAT with the idea that in a disaster recovery situation, that will be the easiest to mount on any platform (Windows/Linux/OSX). I know Duplicati creates a large number of files; perhaps there is a Linux performance issue with that filesystem. I think read/write is no slower with EXFAT but maybe the particular filesystem operations Duplicati is doing are slowing things down. If someone thinks this is a likely cause, I can try to set up a new backup drive formatted with, e.g., ext3 or ext4.

The comparison is internal, done with database queries.

The storedlist query is simple and the expandedlist query is complex, but the counts are supposed to match. Something became inconsistent, so you got told. Duplicati has many self-checks, and this one is doing its job… What needs to happen (IMO) is to track down how these inconsistencies arise and find a suitable developer to fix them. There may be time and expertise challenges; not everybody has the skill set to work on core algorithms.
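To make the pattern concrete, here is a minimal sketch of what a self-check like this amounts to (purely illustrative Python, not the real storedlist/expandedlist SQL): a stored count is compared against a count rebuilt the expensive way, and any mismatch is reported rather than silently ignored.

    # Illustration of the self-check pattern only, not Duplicati's real queries.
    class UnexpectedDifferenceError(Exception):
        pass

    def verify_fileset(version, stored_count, expanded_entries):
        # The cheap "stored" count and the expensive "expanded" listing must agree.
        found = len(expanded_entries)
        if found != stored_count:
            raise UnexpectedDifferenceError(
                f"Unexpected difference in fileset version {version}: "
                f"found {found} entries, but expected {stored_count}")

    try:
        # The two views of the same fileset disagree by one entry.
        verify_fileset(4, 6181, ["some/path"] * 6180)
    except UnexpectedDifferenceError as err:
        print(err)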

My mistake - I may have been thinking of a different error.

I agree that messages like this are Duplicati doing its job - it just needs to go the extra mile(s) to avoid the problem and/or walk the user through the solution.

I’ll get on that as soon as I win the lottery. :slight_smile:

I just got this error on my largest backup set, even after the litany of issues I had with it last week that I’d thought I’d solved on my own. This happened on my very next automatically scheduled backup (despite it having run successfully several times when kicked off manually earlier that day).

Repair does nothing. I’m not interested in doing another Recreate now without evidence that it’ll actually work long-term. I’m about to just give up. :cold_sweat:


Repair finishes almost instantly and claims this, even just moments after the above is seen when trying to run a backup:

After deleting version 0 (the most recent one that Duplicati was complaining about), I got this error:
Duplicati.Library.Interface.UserInformationException: Found 158834 remote files that are not recorded in local storage, please run repair
Running repair manually didn’t work, and it seems to have automatically triggered a database recreate. It looks like that is going to take about two weeks to complete.

I don’t know of anything that would automatically trigger a database recreate (@kenkendk, please correct me if I’m wrong) so it sounds like your local database went away somehow.

I’m not sure how a version delete would do that… Am I correct in assuming you’ve got 158834 files at the destination?

I’m not sure how the recreate got started – the last thing I did, after getting the “please run repair” error, was to run a repair. Then, overnight – I believe in connection with the timed nightly backup – Duplicati self-initiated a rebuild. Here’s the log:

2019-03-03 01:57:03 -05 - [Error-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraRemoteFiles]: Found 158834 remote files that are not recorded in local storage, please run repair
2019-03-03 01:57:04 -05 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
Duplicati.Library.Interface.UserInformationException: Found 158834 remote files that are not recorded in local storage, please run repair
  at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.String protectedfile) [0x00104] in <fbbeda0cad134e648d781c1357bdba9c>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify (Duplicati.Library.Main.BackendManager backend, System.String protectedfile) [0x0005f] in <fbbeda0cad134e648d781c1357bdba9c>:0 
2019-03-03 01:57:09 -05 - [Information-Duplicati.Library.Modules.Builtin.SendMail-SendMailComplete]: Email sent successfully using server: smtps://smtp.gmail.com
2019-03-03 09:11:13 -05 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Repair has started
2019-03-03 09:11:14 -05 - [Information-Duplicati.Library.Main.Operation.RepairHandler-RenamingDatabase]: Renaming existing db from [REDACTED].sqlite to [REDACTED].backup
2019-03-03 09:11:17 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()
2019-03-03 10:07:33 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (155.11 KB)
2019-03-03 14:29:45 -05 - [Information-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-RebuildStarted]: Rebuild database started, downloading 4 filelists

Yes, find reports exactly 158834 files in the (local) backup destination directory.

I should also note that in the few months I’ve been trying to use Duplicati, problems like this have arisen a few times. In the past, I’ve just given up, reformatted the backup destination, and started over. But an initial backup takes at least two weeks, maybe longer. I’m not sure if rebuilding is even any faster than just doing a fresh backup.

If there are 158834 files in my backup destination directory, and 158834 files not recorded in the database, does that mean that rebuilding the database will require Duplicati to register a “Backend Event - Completed” in the log 158834 times? At the current rate, I’m only getting about one such entry per minute, so recreating the database would take more than three months. These are pretty tiny files (20-100 KB), so the slow speed seems crazy. Any suggestions for why this is?
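For what it’s worth, the back-of-the-envelope arithmetic behind that estimate (assuming the roughly one-entry-per-minute rate holds):

    # Rough estimate only, using the numbers above.
    files = 158_834   # files at the destination
    per_minute = 1    # completed backend events per minute, at the current rate
    minutes = files / per_minute
    print(f"{minutes / 60:.0f} hours ≈ {minutes / 1440:.0f} days ≈ {minutes / 43200:.1f} months")
    # -> 2647 hours ≈ 110 days ≈ 3.7 months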

Regardless of how the recreate got started (it really shouldn’t be happening automatically), recreate performance is a known issue.

Much of it seems to be related to inefficient SQL checking the database for already existing records. While I found some time to build some metrics for my specific test case, I haven’t gotten around to actually testing any fixes. :frowning:

What generally seems to happen is that the progress bar moves along until about 75%, then seems to stop. Duplicati is actually still working, but it takes progressively longer to finish a percent of work the further along the process is.
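As a toy model of that effect only (this is not Duplicati’s schema or its actual SQL), here is a sketch of how repeated “does this record already exist?” checks slow down as the table grows when the lookup column isn’t indexed:

    # Toy demonstration: each unindexed existence check is a full table scan,
    # so the per-record cost grows with the number of rows already inserted.
    import sqlite3
    import time

    def insert_with_existence_checks(indexed, total=20_000, report_every=5_000):
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE block (hash TEXT)")
        if indexed:
            db.execute("CREATE INDEX idx_block_hash ON block (hash)")
        start = time.time()
        for i in range(total):
            h = f"hash-{i}"
            # the per-record "already exists?" check that dominates the run time
            if db.execute("SELECT 1 FROM block WHERE hash = ?", (h,)).fetchone() is None:
                db.execute("INSERT INTO block (hash) VALUES (?)", (h,))
            if (i + 1) % report_every == 0:
                print(f"indexed={indexed} rows={i + 1:6d} elapsed={time.time() - start:6.2f}s")

    insert_with_existence_checks(indexed=False)  # gets slower as rows accumulate
    insert_with_existence_checks(indexed=True)   # stays roughly flat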

In regards to recreate vs. starting over, it MIGHT be faster to start over, but you’ll likely use upload bandwidth re-pushing the files. You’ll also have a break in the history of previous backups.

One thing I’ve recommended to people is that if they want to start over, they point to a new destination folder and leave the “broken” one in place. If needed, a direct restore from the “broken” destination can still be done.

That way you’re not left with NO backup while the new one is “filling”. And of course you can keep the “broken” destination around for as long as you care about the history it might contain.

This is just a backup to a local (external USB) drive, so bandwidth is not an issue. At the current rate, it looks like it is pretty stable at 90 days left to recreate the database. From your comment, it sounds like it will slow down rather than speed up as it makes progress, so starting fresh would be much faster (I think the first backup took about two weeks). Since this database issue has happened to me several times, though, I’m starting to think Duplicati just isn’t workable for my current setup and I should just use rsync or rdiff-backup until Duplicati is more stable and efficient.

I too have started getting this error on one of my backup sets. I’ve not changed anything in that configuration AFAIK.

2.0.4.5 (2.0.4.5_beta_2018-11-28)

There are 16 versions of this backup set.
It runs daily.
The most recent successful run was on March 2nd.
The fileset from version 4 (2/24) has the unexpected difference in the number of entries.

Given that this is a moderately large fileset (57GB), I’d love to know what you’d suggest doing to repair it.

 Failed: Unexpected difference in fileset version 4: 2/24/2019 4:03:11 AM (database id: 100), found 6180 entries, but expected 6181
    Details: Duplicati.Library.Interface.UserInformationException: Unexpected difference in fileset version 4: 2/24/2019 4:03:11 AM (database id: 100), found 6180 entries, but expected 6181
      at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency (System.Int64 blocksize, System.Int64 hashsize, System.Boolean verifyfilelists, System.Data.IDbTransaction transaction) [0x00370] in <c6c6871f516b48f59d88f9d731c3ea4d>:0

Posting a “me too” on this issue. I’ve had it happen frequently. It’s a PITA to rebuild my backups since I’m backing up to a server at my parents’ place and they have a bandwidth cap. I generally have to grab the server and external drive and bring them home to rebuild a backup :angry:

You don’t have to delete the entire backup and start over. All you need to do is delete the specific backup version.

It is believed that this bug has been fixed, but the fix hasn’t made its way into the Beta releases yet. If you are willing to use a Canary version, you’ll have access to it in 2.0.4.22 or newer (excluding the special 2.0.4.23 beta release).

How do you delete a specific version? It says the error is with version 0

I can provide a quick rundown, but you’ll find more detail in numerous other threads. Try using the forum’s search function to find the relevant posts.

But the quick rundown (a command-line sketch follows the list):

  • Go to the main Duplicati Web UI
  • Click the backup set that is having the issue
  • Click the “Commandline …” link
  • Pick “delete” in the Command dropdown
  • Scroll to the bottom and pick “version” from the Add Advanced Option dropdown
  • Type the version number that is causing you issues (in your case, “0”)
  • Click the “Run delete command now” button
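If you’d rather script it than click through the UI, a rough command-line equivalent looks something like the sketch below. The storage URL, database path, and passphrase are placeholders (take the real values from the job’s Commandline/Export screen), and option details can vary between versions, so treat this as a sketch rather than a recipe:

    # Hedged sketch of the command-line equivalent of the web UI steps above.
    # Assumes the duplicati-cli wrapper is on PATH; the URL, dbpath and
    # passphrase below are placeholders, not values to copy verbatim.
    import subprocess

    subprocess.run([
        "duplicati-cli", "delete",
        "file:///mnt/external/backup",   # placeholder storage URL for this job
        "--version=0",                   # the version the error complains about
        "--dbpath=/path/to/job.sqlite",  # placeholder local database path
        "--passphrase=...",              # placeholder; omit if unencrypted
    ], check=True)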

Good luck!


I’m facing the same issue on my linux laptop.
I just upgraded to 2.0.4.28.

Backups are often interrupted by sleep/suspend. I don’t mind that the current backup will need to restart, but I’m troubled by the broken state it’s left in afterward.

Worked!!! Thanks! Will keep this in mind for future issues.

Good to hear! Hopefully this problem will be a thing of the past once the fix gets in the Beta releases…