Help with Unexpected Difference in Fileset

Hi,

Sorry, this question has probably been answered several times already. I went through a number of articles about this issue, but I was not able to resolve it with the information found in them.

My backup is giving the following error:

Unexpected difference in fileset version 2: 2019-05-12 19:16:11 (database id: 8), found 899913 entries, but expected 899914

According to what I found in other posts, I should delete the version in question via the command line. When I try to do this, I am getting the following output:

C:\Users\Tournesol>"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" delete "file://Y:\backup fast alles 2\" --version=2

Enter encryption passphrase: ********

Database file does not exist: C:\Users\Tournesol\AppData\Local\Duplicati\IYQQHANOJQ.sqlite
Update "2.0.4.23_beta_2019-07-14" detected

Looking into the graphical user interface, I see that the database mentioned for this backup is a different one:

C:\Users\Tournesol\AppData\Local\Duplicati\69738978737565668368.sqlite

Any help is highly appreciated. I am running into this error from time to time. In earlier cases, I tried to repair the database (which did not help) and to recreate it (which never finished). I always ended up removing the whole backup.

Hello @toblak and welcome to the forum!

There are a whole lot of options that need to be set appropriately to do a delete from a Command Prompt for a GUI backup job. Doing the delete in the GUI Commandline for the backup solves that by supplying the options of the backup command; all you must do then is edit the backup into a delete. One method is to change Command from backup to delete, then replace the contents of the Commandline arguments box with --version=2 (or supply it via Add advanced option).
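
As a sketch, the two relevant boxes on the Commandline page would then end up looking like this (everything else pre-filled for the job stays as it is):

Command: delete
Commandline arguments: --version=2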

Almost everything uses a database. I think your delete just invented one because you gave it no --dbpath.

Whether or not this fixes the original problem isn’t known, but at least it may let you see if delete can help.
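
For reference, the equivalent Command Prompt delete pointed at the job’s real database (the path you saw in the GUI) would look roughly like the sketch below; depending on the job, further options from its configuration may also be needed:

REM --dbpath points delete at the job database instead of letting it invent one
"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" delete "file://Y:\backup fast alles 2\" --version=2 --dbpath="C:\Users\Tournesol\AppData\Local\Duplicati\69738978737565668368.sqlite"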

Hello @ts678,

Thanks for your quick reply. Your instructions helped me execute the delete command. It is a bit puzzling, though, that the --dbpath option is not mentioned in the documentation of that command.

Unfortunately, deleting the version did not fix the problem. After I deleted version 2 and ran the backup again, I got a message about an unexpected difference in fileset version 1, and after deleting version 1, a similar message for version 0.
Deleting version 0 did not help either: after deleting it and running the backup again, I got the very same message about version 0 (same values for “found” and “expected”).

Anything else I can do?

I might have heard of SQL-savvy users editing the database, but I wouldn’t suggest that for everyone…
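
For the curious, a strictly read-only peek is far safer than editing. My assumption (not something confirmed by your output) is that the check counts rows in the FilesetEntry table, so with the sqlite3 command-line shell and the database id 8 from your error, that count could be inspected like this:

REM read-only count of entries for the fileset reported as database id 8
sqlite3.exe "C:\Users\Tournesol\AppData\Local\Duplicati\69738978737565668368.sqlite" "SELECT COUNT(*) FROM FilesetEntry WHERE FilesetID = 8;"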

Possibly you found or tried some other things people attempted while you were looking for articles earlier?

One thing that might be helpful, at least for future reference, would be if you’re willing to post a bug report so someone can see what your database looks like. Pathnames are sanitized for privacy against ordinary DB browsers, but a deep forensic analysis might still find them. There’s also a privacy defect in the next true beta. 2.0.4.23 beta is just 2.0.4.5 beta plus a warning about Amazon Cloud Drive going away. Since you’ve seen this issue for a while, you might also consider trying to narrow it down to see if you can find what causes it.

Fatal error: Detected non-empty blocksets with no associated blocks is a guide for those pursuing this; however, for privacy reasons (if they matter to you), very detailed logging is best done with non-private files.
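
If you do pursue it that way, a sketch of a verbosely logged run follows; the destination, source folder, and log path are placeholders, while --log-file and --log-file-log-level are the standard options for file logging:

REM capture maximum detail to a log file during a test backup of non-private files
"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" backup "file://Y:\test-destination\" "C:\non-private-test-folder" --log-file="C:\tmp\duplicati.log" --log-file-log-level=Profiling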

The bug report has been running for almost five hours now. I fear that it is hanging. I am going to delete this backup over the weekend and create a new one.
On the next occurrence of this problem, I will try again to create a bug report. Thank you for your help so far.

Another question in this context: When making a backup of the database, I saw that it is stored in %appdata% on the local disk and not on the backup destination. Is the database not needed for restoring files?

It is not absolutely necessary for restore (I recently did a restore without), but it significantly speeds up the metadata processing that is needed for restore.
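
To illustrate (a sketch, with a placeholder target folder): a restore can be told to ignore the local database with --no-local-db, which makes Duplicati fetch what it needs from the destination instead:

REM restore everything without using the local job database
"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" restore "file://Y:\backup fast alles 2\" "*" --no-local-db --restore-path="C:\restore-test"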

If you want to back up the local database(s), it is best to create a separate backup job that backs up only these databases, except its own (because that one is in active use). If you are then in a disaster situation and do not have access to the local databases, restoring from this separate job is faster and safer than restoring from some other backup set.
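
A sketch of such a job as a command line; NEWJOBDB.sqlite is a hypothetical name standing in for this job’s own database, and the destination is made up too:

REM back up all local Duplicati databases except this job's own
"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" backup "file://Y:\duplicati-databases\" "C:\Users\Tournesol\AppData\Local\Duplicati\" --exclude="*NEWJOBDB.sqlite" --dbpath="C:\Users\Tournesol\AppData\Local\Duplicati\NEWJOBDB.sqlite"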

Got another incident of this issue:

Unexpected difference in fileset version 2: 2019-08-26 20:43:52 (database id: 40), found 854609 entries, but expected 854611

I tried creating the bug report before making any repair attempt, but it kept running for more than a day before I stopped it.

“Unexpected difference in fileset” test case and code clue #3800 led to the fix below in release v2.0.4.22-2.0.4.22_canary_2019-06-30:

Fixed data corruption caused by compacting, thanks @ts678 and @warwickmm

which unfortunately was after v2.0.4.21-2.0.4.21_experimental_2019-06-28, which didn’t become a beta.

You can check the job log from the backup that ran before your problem to see if compact shows any statistics.

If so, one way to avoid a repetition is to set --no-auto-compact=true (which prevents re-use of free space) or to install a fixed version. 2.0.4.22 got well over two months of testing, while 2.0.4.28 canary is the latest canary. If it does well (probably over a shorter period of testing), then it will likely lead to a beta at some point.
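
The option can be set per job in the GUI under Advanced options, or on a command line like this sketch (the source folder is a placeholder):

REM keep backing up, but disable automatic compacting of remote volumes
"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" backup "file://Y:\backup fast alles 2\" "C:\source-folder" --no-auto-compact=true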

Note that due to database format updates, downgrading (e.g. to 2.0.4.23, which is basically 2.0.4.5) is hard, but upgrading to the next feature beta (or experimental) should work. You should also change your Settings to a more stable channel (Beta, for example) if you install Canary. You can change to Canary temporarily if you want to upgrade that way (instead of using the full installer), then change it back before a broken canary ships.

EDIT: Upgrading would be to avoid future problems. Current problem is dealt with (or not) as shown earlier.

Thanks for the information provided. I set --no-auto-compact to true for all my backup jobs and will see within a few months whether this helps. The corrupted backup I am going to delete and recreate. (I have a lot of redundancy in my backup jobs.)

Currently, I am running Duplicati - 2.0.4.23_beta_2019-07-14 and am on the default (beta) upgrade channel. I am not sure whether or not you are advising me to upgrade to 2.0.4.28 canary.

Advising upgrade to canary is always a tough call, especially for one so new. I’d feel quite comfortable advising 2.0.4.22 (and I ran my personal backup on that to avoid too-frequent “Unexpected difference”) however in general canary is bleeding-edge not-very-proven stuff, so staying on it can get “surprising”, though sometimes the features and fixes outweigh any accidental breakage. Testers are also needed.

With your level of redundancy, a good plan might be to start on relatively well-proven 2.0.4.22, upgrade shortly to 2.0.4.28 if 2.0.4.22 is good, and stay on that until the next experimental (probably) or actual beta.

Your idea of just burning some extra storage will also work (assuming this is the issue; you can study the logs), and then hopefully, before the storage waste gets too high, you can get onto a safer channel than canary.