*broken* Release: (canary) 2019-01-29

This version has a bug that breaks the local database if you upgrade from a previous version. Consider skipping this version; see details below.

  • Added tests and improved code quality, thanks @warwickmm
  • Changed the internal storage of paths to use a prefix method. This should reduce the size of the database significantly and enable much faster database queries later on
  • Increased timeouts for reading the output from the commandline process to allow long running background jobs

Just upgraded a Windows machine, and the backups that worked fine a few hours ago now fail:

2019-01-29 10:50:00 +01 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
Duplicati.Library.Interface.UserInformationException: Unexpected difference in fileset version 9: 23/10/2018 14:57:12 (database id: 44), found 39742 entries, but expected 40964

I tried a verify, which fails the same way, and a repair finds no problems. This happens on both a local SMB backup and a backup to Wasabi.

What should I do other than revert back to the previous version?

Same has happened on my Fedora box, so I’m going to revert that to see if it sorts it, then probably revert the Windows machine as well.

I’m back on the previous canary. I needed to restore the databases, which seems to have worked, though it wasn’t helped by the terrible naming of the backup files. You could at least keep the original database name in the name of the backup file; I had to use the timestamps and file sizes to work out which one was which.

@warwickmm @Pectojin Could this be related to the upgrade from the previous file-path storage?

Hmm, I didn’t see anything in my tests, but running the new version I get the same errors:

2019-01-29 14:22:24 +01 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
Duplicati.Library.Interface.UserInformationException: Unexpected difference in fileset version 31: 12/16/2018 12:26:24 PM (database id: 26), found 247779 entries, but expected 247807
  at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency (System.Int64 blocksize, System.Int64 hashsize, System.Boolean verifyfilelists, System.Data.IDbTransaction transaction) [0x00370] in <d63d0968bc0e415f8996d6f39de67a1f>:0 
  at Duplicati.Library.Main.Operation.Backup.BackupDatabase+<>c__DisplayClass33_0.<VerifyConsistencyAsync>b__0 () [0x00000] in <d63d0968bc0e415f8996d6f39de67a1f>:0 
  at Duplicati.Library.Main.Operation.Common.SingleRunner+<>c__DisplayClass3_0.<RunOnMain>b__0 () [0x00000] in <d63d0968bc0e415f8996d6f39de67a1f>:0 
  at Duplicati.Library.Main.Operation.Common.SingleRunner.DoRunOnMain[T] (System.Func`1[TResult] method) [0x000b0] in <d63d0968bc0e415f8996d6f39de67a1f>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x003cd] in <d63d0968bc0e415f8996d6f39de67a1f>:0 

I toggled disable-filelist-consistency-checks, and the backup runs and restores fine. (Don’t try this at home.)

Maybe the consistency check parses old fileset files incorrectly? That would explain why I didn’t catch it in a new backup created with the changed DB layout.

Given the database change (and the user experience reported here), we might want to highlight in the OP (and in the GUI changelog, if possible) that downgrading from this version will likely require a database restore.

Yes, it’s best to restore if you want to downgrade. I think technically the database view is identical, but it will obviously fail the database version check since it was changed.

Same error message for me on Duplicati on Windows. Not running backups and holding my horses for now.

Same issue here, updated to this morning. Backup throws error:

Unexpected difference in fileset version 9: 1/18/2019 1:42:00 AM (database id: 94), found 131228 entries, but expected 135416

The last successful backup was early AM yesterday, the 28th. I tried a repair that reported success, but the following backup gives the same error. I didn’t want to do a Recreate (delete and repair), since it would take about 2 weeks to complete. Is there anything else I should try, or should I simply pause the backups until an update/fix is available to download?

I downloaded the dlist from the mentioned date, unencrypted, unzipped, and parsed the paths out of the filelist.json file.

# cat filelist.json  | jq '.[].path' | wc -l

And then I queried my database for all file paths on the given fileset id (26)

select File.path from FilesetEntry 
inner join File on FilesetEntry.FileID = File.ID 
where FilesetEntry.FilesetID = 26

Which provided 247807 rows.

So there’s definitely something funky. The database and the file agree, but somehow it’s not parsed right by Duplicati.

Oh… the database holds duplicate paths; when I SELECT DISTINCT I get 247779.
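The duplicate check can be sketched with the same query plus a GROUP BY. This is a minimal mock in Python/sqlite3, not the real Duplicati schema: table and column names follow the thread, but the sample data and the fileset ID 26 are just stand-ins.

```python
import sqlite3

# Tiny mock of the relevant tables (the real Duplicati schema has more columns)
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE File (ID INTEGER PRIMARY KEY, Path TEXT);
CREATE TABLE FilesetEntry (FilesetID INTEGER, FileID INTEGER);
INSERT INTO File VALUES (1, '/home/a'), (2, '/home/a'), (3, '/home/b');
INSERT INTO FilesetEntry VALUES (26, 1), (26, 2), (26, 3);
""")

# Paths that appear more than once in fileset 26
dupes = con.execute("""
    SELECT File.Path, COUNT(*) AS n
    FROM FilesetEntry
    JOIN File ON FilesetEntry.FileID = File.ID
    WHERE FilesetEntry.FilesetID = 26
    GROUP BY File.Path
    HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # [('/home/a', 2)]
```

Run against a copy of your own database (never the live one) to see which paths are duplicated.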

When I remove duplicate lines and run diff on the two files, there are actually differences, though only 33 of them.
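The comparison above can be sketched as a set difference. The path lists here are stand-ins; the real ones came from the DB query and from filelist.json.

```python
# Stand-in path lists for the two sources being diffed
db_paths = ["/home/a", "/home/a", "/home/b", "/home/c"]  # from the DB query
dlist_paths = ["/home/a", "/home/b", "/home/d"]          # from filelist.json

# Dedupe both sides, then look at what each side has that the other lacks
only_db = sorted(set(db_paths) - set(dlist_paths))
only_dlist = sorted(set(dlist_paths) - set(db_paths))
print(only_db)     # ['/home/c']
print(only_dlist)  # ['/home/d']
```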

I also queried my pre-upgrade database for the same paths. It does not contain duplicates and is identical to the fileset on the remote destination.

I am starting to think the DB upgrade script may be to blame.

I’m going to try to recreate the DB, which usually takes 2-3 hours.

I’ve been receiving this error since I upgraded some 10 days ago from duplicati- to Duplicati-, and it’s killing me, as fixing it seems to take a long time (per others’ comments). I haven’t managed to fix it yet; I’ve resorted to recreating some affected sets and letting the others keep generating the error while I ponder a downgrade.

So, what I want to say here is that this “Unexpected difference in fileset version x” error was certainly not introduced in this release ( of 2019-01-29). I hope this helps somehow in looking for the bug.

Recreating database worked

As in a database recreate stopped the “Unexpected difference in fileset” issues? (Meaning your DB update script theory is likely correct.)

Did you try a DB repair before the recreate?

@SamSirry, thanks for the info. I think the error has multiple causes, one of which appears to be updating to :frowning:

Yes, recreate returned my backup to running normally.

I did not try repair.

I’m a bit disappointed that our testing did not catch this. :cry:

I tried to replicate this by creating a backup configuration using, running a backup, upgrading to, and then running the backup again. However, I did not encounter any issues. Am I missing a step?

I have the same error: “Unexpected difference in fileset version… found xxx entries, but expected yyy”

Windows system; I updated Duplicati by installing the new MSI file. No other steps.

It looks like the migration of an existing database is the problem, which is also why it was not found during testing, as we usually start from scratch (at least in the unit tests).

Somehow the migration to the new database layout creates duplicate path entries. We should be able to simply update any references to a duplicated path to point at a single entry and drop the rest.
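A hedged sketch of that cleanup idea on a mock schema (table names follow the thread, not the full Duplicati schema): for each duplicated path, repoint references at the lowest ID, then delete the now-unreferenced rows.

```python
import sqlite3

# Mock: two FileLookup rows share the same path, and FilesetEntry references both
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE FileLookup (ID INTEGER PRIMARY KEY, Path TEXT);
CREATE TABLE FilesetEntry (FilesetID INTEGER, FileID INTEGER);
INSERT INTO FileLookup VALUES (1, '/a'), (2, '/a'), (3, '/b');
INSERT INTO FilesetEntry VALUES (26, 1), (26, 2), (26, 3);
""")

# Repoint every reference at the lowest ID that holds the same path,
# then drop the duplicates that nothing references any more.
con.executescript("""
UPDATE FilesetEntry SET FileID =
    (SELECT MIN(f2.ID) FROM FileLookup f1, FileLookup f2
     WHERE f1.ID = FilesetEntry.FileID AND f2.Path = f1.Path);
DELETE FROM FileLookup WHERE ID NOT IN
    (SELECT MIN(ID) FROM FileLookup GROUP BY Path);
""")
print(con.execute("SELECT ID, Path FROM FileLookup ORDER BY ID").fetchall())
# [(1, '/a'), (3, '/b')]
```

Note that a real cleanup would also need to deduplicate the resulting FilesetEntry rows, since both old references now point at the same ID.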

@Pectojin can you provide a database before and after the upgrade which shows the problem? Otherwise I will try to recreate with one of my own backups.
Edit: I have recreated the error with my own database, and I can debug from there.


The update script is certainly the problem.
When it creates the new FileLookup table and inserts the entries from the File table, it does not copy over the ID column, causing all entries to get new IDs which is why they do not match.

The code is:

/* Build the path lookup table */
INSERT INTO "FileLookup" ("Path", "PrefixID", "BlocksetID", "MetadataID")
SELECT
  SUBSTR("Path", LENGTH("ParentFolder") + 1) AS "Path",
  "ID" AS "PrefixID",
  "BlocksetID",
  "MetadataID"
FROM
(SELECT "Path", "BlocksetID", "MetadataID",
    CASE SUBSTR("Path", LENGTH("Path")) WHEN '/' THEN
        rtrim(SUBSTR("Path", 1, LENGTH("Path")-1), replace(replace(SUBSTR("Path", 1, LENGTH("Path")-1), "\", "/"), '/', ''))
    ELSE
        rtrim("Path", replace(replace("Path", "\", "/"), '/', ''))
    END AS "ParentFolder"
FROM "File") "A" INNER JOIN "PathPrefix" "B" ON "A"."ParentFolder" = "B"."Prefix";

I have tested this query instead on my dataset; it correctly preserves the IDs and seems to pass all tests.

/* Build the path lookup table */
INSERT INTO "FileLookup" ("ID", "Path", "PrefixID", "BlocksetID", "MetadataID")
SELECT
  "A"."ID" AS "ID",
  SUBSTR("Path", LENGTH("ParentFolder") + 1) AS "Path",
  "B"."ID" AS "PrefixID",
  "BlocksetID",
  "MetadataID"
FROM
(SELECT "ID", "Path", "BlocksetID", "MetadataID",
    CASE SUBSTR("Path", LENGTH("Path")) WHEN '/' THEN
        rtrim(SUBSTR("Path", 1, LENGTH("Path")-1), replace(replace(SUBSTR("Path", 1, LENGTH("Path")-1), "\", "/"), '/', ''))
    ELSE
        rtrim("Path", replace(replace("Path", "\", "/"), '/', ''))
    END AS "ParentFolder"
FROM "File") "A" INNER JOIN "PathPrefix" "B" ON "A"."ParentFolder" = "B"."Prefix";

My suggestion is that we push out an update with this change ASAP to avoid hurting more users, and then deal with those who have already updated.

From my database, it seems the mapping IDs are lost during the update and cannot be recovered. However, the previous database should exist as a backup, so it should be possible to:

  1. Install version
  2. Remove the current (broken) database, and copy in the backup
  3. Run a backup (which will update correctly)

Only (2) should cause problems. As pointed out elsewhere, the naming of the backups does not indicate what it is a backup of, so some guessing / handholding is required here.
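Since the backup files don't say which database they belong to, matching one up means comparing timestamps and sizes, as noted above. A small helper sketch; the directory you pass in depends on your install (`~/.config/Duplicati` on Linux is an assumption about your setup).

```python
import os

def list_db_backups(backup_dir):
    """Return (name, size_bytes, mtime) for each .sqlite file, oldest first,
    to help match a backup database by timestamp and size."""
    rows = []
    for name in os.listdir(backup_dir):
        path = os.path.join(backup_dir, name)
        if name.endswith(".sqlite") and os.path.isfile(path):
            st = os.stat(path)
            rows.append((name, st.st_size, st.st_mtime))
    return sorted(rows, key=lambda row: row[2])

# e.g.: for row in list_db_backups(os.path.expanduser("~/.config/Duplicati")):
#           print(row)
```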
