The storedlist is simple and the expandedlist is complex, but the counts are supposed to match. Something became inconsistent, so Duplicati told you. Duplicati has many self-checks, and this one is doing its job… What needs to happen (IMO) is to track down how these inconsistencies arise and find a suitable developer to fix them. There may be time and expertise challenges; not everybody has the skill set to work on core algorithms.
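Conceptually, the self-check compares two independent counts of the same fileset; a minimal sketch (the names here are illustrative, not Duplicati's actual schema or code) looks something like:

```python
# Illustrative sketch of a count-based consistency check, NOT Duplicati's
# actual code. A "stored" list of entries is compared against the list
# obtained by expanding the fileset's references; a mismatch means the
# local database is internally inconsistent.

def verify_fileset(stored_entries, expanded_entries):
    """Raise if the two independent counts disagree."""
    found, expected = len(expanded_entries), len(stored_entries)
    if found != expected:
        raise ValueError(
            f"Unexpected difference in fileset: found {found} entries, "
            f"but expected {expected}")

# A fileset whose expansion dropped one entry triggers the error:
stored = ["a.txt", "b.txt", "c.txt"]
expanded = ["a.txt", "b.txt"]          # one entry went missing somewhere
try:
    verify_fileset(stored, expanded)
except ValueError as e:
    print(e)  # prints the "Unexpected difference" mismatch message
```

The point is that the check itself is cheap and correct; the hard part is finding the code path that lets the two lists drift apart in the first place.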
I just got this error on my largest backup set, even after the litany of issues I had with it last week that I’d thought I’d solved on my own. This happened to me on my very next automatically-scheduled backup (despite it having run successfully several times when kicked off manually earlier that day).
Repair does nothing. I’m not interested in doing another Recreate now without evidence that it’ll actually work long-term. I’m about to just give up.
After deleting version 0 (the most recent one that Duplicati was complaining about), I got this error: Duplicati.Library.Interface.UserInformationException: Found 158834 remote files that are not recorded in local storage, please run repair
Running repair manually didn’t work, and it seems to have automatically triggered a database recreate. Looks like that is going to take about two weeks to complete.
I’m not sure how the recreate got started – the last thing I did, after getting the “please run repair” error, was to run a repair. Then, overnight – I believe in connection with the timed nightly backup – Duplicati self-initiated a rebuild. Here’s the log:
2019-03-03 01:57:03 -05 - [Error-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraRemoteFiles]: Found 158834 remote files that are not recorded in local storage, please run repair
2019-03-03 01:57:04 -05 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
Duplicati.Library.Interface.UserInformationException: Found 158834 remote files that are not recorded in local storage, please run repair
at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.String protectedfile) [0x00104] in <fbbeda0cad134e648d781c1357bdba9c>:0
at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify (Duplicati.Library.Main.BackendManager backend, System.String protectedfile) [0x0005f] in <fbbeda0cad134e648d781c1357bdba9c>:0
2019-03-03 01:57:09 -05 - [Information-Duplicati.Library.Modules.Builtin.SendMail-SendMailComplete]: Email sent successfully using server smtps://smtp.gmail.com
2019-03-03 09:11:13 -05 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Repair has started
2019-03-03 09:11:14 -05 - [Information-Duplicati.Library.Main.Operation.RepairHandler-RenamingDatabase]: Renaming existing db from [REDACTED].sqlite to [REDACTED].backup
2019-03-03 09:11:17 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: ()
2019-03-03 10:07:33 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed: (155.11 KB)
2019-03-03 14:29:45 -05 - [Information-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-RebuildStarted]: Rebuild database started, downloading 4 filelists
Yes, find reports exactly 158834 files in the (local) backup destination directory.
I should also note that in the few months I’ve been trying to use Duplicati, problems like this have arisen a few times. In the past, I’ve just given up, reformatted the backup destination, and started over. But an initial backup takes at least two weeks, maybe longer. I’m not sure if rebuilding is even any faster than just doing a fresh backup.
If there are 158834 files in my backup destination directory, and 158834 files missing, does that mean that rebuilding the database will require Duplicati to register a “Backend Event - Completed” in the log 158834 times? At the current rate, I’m only getting about one such entry per minute, so recreating the database will take three months. These are pretty tiny files (20-100KB) so the slow speed seems crazy. Any suggestions for why this is?
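The back-of-envelope arithmetic behind that "three months" estimate, assuming one file processed per minute (the rate observed in the log):

```python
# Rough estimate of recreate time at the observed processing rate.
# Both numbers come from the post above; neither is a Duplicati constant.
files = 158_834          # files at the destination / reported missing
per_minute = 1           # observed "Backend event ... Completed" rate

minutes = files / per_minute
days = minutes / (60 * 24)
print(f"{days:.0f} days")   # → 110 days
```

At 110 days, even a generous error margin leaves the recreate far slower than the ~two-week initial backup, which is what makes the per-file download pace the real question here.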
Regardless of how the recreate got started (it really shouldn’t be happening automatically), recreate performance is a known issue.
Much of it seems to be related to inefficient SQL checking the database for already existing records. While I found some time to build some metrics for my specific test case, I haven’t gotten around to actually testing any fixes.
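One plausible shape of that inefficiency (the table and column names below are invented for illustration, not Duplicati's real schema): recreate does a "does this record already exist?" query per block, and without a supporting index each query is a full table scan, so total work grows roughly quadratically as the table fills up.

```python
import sqlite3

# Sketch of the per-record existence check, with invented schema names.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE block (hash TEXT, size INTEGER)")
con.executemany("INSERT INTO block VALUES (?, ?)",
                [(f"h{i}", 1024) for i in range(10_000)])

def block_exists(h):
    # Without an index on hash, each lookup scans the whole table, so the
    # cost per lookup keeps growing as more blocks are inserted.
    row = con.execute(
        "SELECT 1 FROM block WHERE hash = ? LIMIT 1", (h,)).fetchone()
    return row is not None

print(block_exists("h42"), block_exists("missing"))   # True False

# An index turns each lookup into an O(log n) search instead:
con.execute("CREATE INDEX IF NOT EXISTS block_hash ON block (hash)")
```

This would also match the symptom described next: the slowdown is not constant, but gets worse the further the recreate progresses.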
What generally seems to happen is the progress bar moves along until about 75%, then seems to stop. Duplicati is actually still working, but it takes progressively longer to finish each percent of work the further along the process is.
In regards to recreate vs. starting over, it MIGHT be faster to start over, but you’ll likely use upload bandwidth re-pushing the files. You’ll also lose continuity with the history of previous backups.
One thing I’ve recommended to people is if they want to start over point to a new destination folder and leave the “broken” one in place. If needed, a direct restore from the “broken” destination can be done.
That way you’re not left with NO backup while the new one is “filling”. And of course you can keep the “broken” destination around for as long as you care about the history it might contain.
This is just a backup to a local (external USB) drive, so bandwidth is not an issue. At the current rate, it looks like it is pretty stable at 90 days left to recreate the database. From your comment, it sounds like it will slow down rather than speed up as it makes progress, so starting fresh would be much faster (I think the first backup took about two weeks). Since this database issue has happened to me several times, though, I’m starting to think Duplicati just isn’t workable for my current setup and I should use rsync or rdiff-backup until Duplicati is more stable and efficient.
I too have started getting this error on one of my backup sets. I’ve not changed anything on that configuration AFAIK.
There are 16 versions of this backup set.
It runs daily.
The most recent successful run was on March 2nd.
The fileset from version 4 (2/24) has the unexpected difference in the number of entries.
Given that this is a moderately large fileset (57GB), I’d love to know what you’d suggest doing to repair it.
Failed: Unexpected difference in fileset version 4: 2/24/2019 4:03:11 AM (database id: 100), found 6180 entries, but expected 6181
Details: Duplicati.Library.Interface.UserInformationException: Unexpected difference in fileset version 4: 2/24/2019 4:03:11 AM (database id: 100), found 6180 entries, but expected 6181
at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency (System.Int64 blocksize, System.Int64 hashsize, System.Boolean verifyfilelists, System.Data.IDbTransaction transaction) [0x00370] in <c6c6871f516b48f59d88f9d731c3ea4d>:0
Posting a “me too” on this issue. It has happened to me frequently. It’s a PITA to rebuild my backups since I’m backing up to a server at my parents’ place and they have a bandwidth cap. I generally have to grab the server and external drive and bring them home to rebuild a backup.
You don’t have to delete the entire backup and start over. All you need to do is delete the specific backup version.
It is believed that this bug has been fixed, but the fix hasn’t made its way into the Beta releases yet. If you are willing to use a Canary version, you’ll have access to the fix in 22.214.171.124 or newer (excluding the special 126.96.36.199 beta release).
Although I wonder if this deserves its own topic if it gets into serious analysis – when was the issue seen relative to that upgrade (probably expressed as the number of seemingly successful backups before failure)?
If you think sleep/suspend is related, can you say anything more about where the backup was at the time, and what happened on wake? Do you have any logs or other materials that might be useful for looking into the issue?
There should always be About --> Show log --> General and Remote. The Remote log gives the time each action happened, which might be a clue about when the sleep occurred, because there will be a jump in the timestamps.
What sort of destination are you using? I’ve had no luck causing such a problem with several destinations… What would be great would be steps to reproduce the problem on demand in a fairly generic environment. Even if you can’t trigger it every time, if it happens often then maybe you could set up some additional debug logs.
If you haven’t yet cleared the database problem, you might also consider posting a link to a DB bug report.
Oops, I lost the failing set by correcting it.
Among the many machines I manage, 3 have a rather large Duplicati setup, with 3 to 8 different sets and close to 1 TB each. Of these 3, my laptop is the one complaining most often about unexpected differences, but it is obviously also the one with the most interrupted backups (the other two are file servers), since it enters sleep. I usually simply repair and restart.
I’ll try to report with more details next time.