Repair and Rebuild database fails: Unexpected difference in fileset


#1

Hello there,
I’ve been using Duplicati on my Windows 10 machine for 2-3 years now. A few days ago I realized that it stopped making backups at the beginning of December:

Dec 11, 2018 2:49 PM: Reporting error gave error
System.ObjectDisposedException: Cannot write to a closed TextWriter.
at System.IO.__Error.WriterClosed()
at System.IO.StreamWriter.Flush(Boolean flushStream, Boolean flushEncoder)
at Duplicati.Server.WebServer.RESTHandler.DoProcess(RequestInfo info, String method, String module, String key)

Dec 12, 2018 12:00 PM: Failed while executing “Backup” with id: 1
Duplicati.Library.Interface.UserInformationException: Unexpected difference in fileset version 49: 11.09.2018 13:00:01 (database id: 121), found 112183 entries, but expected 113124
at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency(Int64 blocksize, Int64 hashsize, Boolean verifyfilelists, IDbTransaction transaction)
at Duplicati.Library.Main.Operation.Backup.BackupDatabase.<>c__DisplayClass32_0.<VerifyConsistencyAsync>b__0()
at Duplicati.Library.Main.Operation.Common.SingleRunner.<>c__DisplayClass3_0.<RunOnMain>b__0()
at Duplicati.Library.Main.Operation.Common.SingleRunner.<DoRunOnMain>d__21.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
at Duplicati.Library.Main.Controller.<>c__DisplayClass13_0.<Backup>b__0(BackupResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.Backup(String inputsources, IFilter filter)
at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)
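Out of curiosity, I looked at how one could peek at those per-version entry counts in the job’s local SQLite database. This is only a rough sketch, run against a copy of the database while Duplicati is stopped; the path is an example, and the Fileset/FilesetEntry table names are assumptions based on the default Duplicati 2 schema:

```python
import sqlite3

# Path to a COPY of the job's local database ("Local database path" in the job
# settings). Example path only; the table names below are assumptions based on
# the default Duplicati 2 schema and may differ between versions.
DB_PATH = r"C:\Users\me\AppData\Local\Duplicati\backup-copy.sqlite"

con = sqlite3.connect(DB_PATH)
try:
    # Count the file entries recorded for each backup version (fileset).
    rows = con.execute(
        """
        SELECT f."ID",
               datetime(f."Timestamp", 'unixepoch') AS created,
               COUNT(fe."FileID") AS entries
        FROM "Fileset" f
        LEFT JOIN "FilesetEntry" fe ON fe."FilesetID" = f."ID"
        GROUP BY f."ID"
        ORDER BY f."Timestamp"
        """
    ).fetchall()
    for fileset_id, created, entries in rows:
        print(f"fileset {fileset_id}  created {created}  entries {entries}")
finally:
    con.close()
```

The version named in the error (database id 121) should then show the count mismatch that the check complains about.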

I hit “repair database” the other day, which failed. I tried again, and it failed again with this error:

Jan 9, 2019 10:45 AM: Failed while executing “Repair” with id: 1
Duplicati.Library.Interface.UserInformationException: Failed to repair, after repair 2 blocklisthashes were missing
at Duplicati.Library.Main.Database.LocalRepairDatabase.FixMissingBlocklistHashes(String blockhashalgorithm, Int64 blocksize)
at Duplicati.Library.Main.Operation.RepairHandler.RunRepairCommon()
at Duplicati.Library.Main.Operation.RepairHandler.Run(IFilter filter)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String& paths, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.Repair(IFilter filter)
at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)

So I hit “rebuild database”. Duplicati worked on this for more than 24 hours and seemed stuck. “Abort after file has been written” did not change anything: Duplicati kept working… so I aborted.

Just tried to update to see if a bugfix exists:

Jan 10, 2019 11:12 AM: Error in updater
System.Net.WebException: The remote server returned an error: (502) Bad Gateway.
at System.Net.WebClient.DownloadFile(Uri address, String fileName)
at Duplicati.Library.AutoUpdater.UpdaterManager.CheckForUpdate(ReleaseType channel)

As I said, I have a Windows 10 laptop. On my 500 GB SSD there are about 350 GB of data; I estimate that 300 GB should get backed up.
The target is reached via the SFTP protocol (SSH) on my own FreeNAS-based server.

What should I do? I could create a second backup (start from scratch) and delete the first one once the new one is in place, but I don’t think that is the right way to go.
I’d be very thankful for any advice.


#2

Hello @dejhost and welcome to the forum!

Thanks for your patience in trying so many things. Did you happen to save the original database with the “Unexpected difference in fileset”? Those are sometimes a pretty simple fix. The later errors are tougher; the final “Error in updater”, however, looks like the update server is simply having problems again. Are you running the beta channel, e.g. 2.0.4.5 beta? If so, there’s nothing newer in beta (though canary has several newer releases). Nothing there is clearly noted as a bugfix for these issues, so I wouldn’t jump to recommend canary.
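If you do end up with a damaged-but-intact database again, it is worth making a manual copy of the file shown at the job’s “Local database path” before hitting Repair or Recreate. A minimal sketch of that (the source path below is only an example, so substitute your own):

```python
import shutil
from datetime import datetime
from pathlib import Path

# "Local database path" from the job's Database page -- example path only.
src = Path(r"C:\Users\me\AppData\Local\Duplicati\ABCDEFGHIJ.sqlite")

# Save a dated copy next to the original before any Repair/Recreate attempt.
dst = src.with_name(f"{src.stem}-before-repair-{datetime.now():%Y%m%d%H%M%S}{src.suffix}")
shutil.copy2(src, dst)
print(f"Saved {src} -> {dst}")
```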


#3

Hello ts678,
and thank you for your welcome!
I actually meant to include it in my initial post:

You are currently running Duplicati - 2.0.4.5_beta_2018-11-28

I use the default update channel.

No, unfortunately, I didn’t save the original database. I wasn’t even aware that this is possible, though it sounds very reasonable :)

How long should a regeneration of the database take (about 350 GB of data on an SSD)? Minutes? Hours? Days?
Would you recommend trying to fix it, or just starting a new backup?


#4

I’ve seen forum comments about an automatic save into a dated file, but for me, Recreate made a basically empty database. Mine was called backup 20190110030506.sqlite and was 120 KB in the File Explorer summary. Possibly you’ll be luckier; otherwise it would have needed a manual copy of the file at the “Local database path” shown. Please look around for a backup of about the right size and date. If one exists, a “fix” may be possible. If not, no.
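A quick way to look around is to list every .sqlite file in the Duplicati data folder with its size and date; anything tiny (like my 120 KB file above) is not a usable copy. A rough sketch of that, assuming the usual default data folder on Windows (yours may differ, especially if Duplicati runs as a service):

```python
import os
from datetime import datetime
from pathlib import Path

# Usual default data folder on Windows; adjust if Duplicati was installed as a
# service or configured with a different storage location.
data_dir = Path(os.environ["LOCALAPPDATA"]) / "Duplicati"

for db in sorted(data_dir.glob("*.sqlite")):
    st = db.stat()
    print(f"{db.name:40s} {st.st_size / 1024 / 1024:8.1f} MB  "
          f"{datetime.fromtimestamp(st.st_mtime):%Y-%m-%d %H:%M}")
```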

This is VERY hard to say. What shows up in the forum are anecdotes, probably biased towards the too-long (sometimes weeks), but it likely depends on lots of factors. Sometimes people watch About --> Show log --> Live --> Profiling to see that it’s moving and guess at the completion time, but it’s not clear that the time extrapolates linearly.

There was at least one report from someone who thought starting a new backup was faster, if that’s the goal; however, that has obvious disadvantages if your old versions are important. Personally, I tend to use Duplicati more for short-term restores, so restarts aren’t a big deal. On my destination, it’s also cheaper (free upload).

What’s the current situation? The topic title was “Unexpected difference in fileset”, but is that the original error or the current one? It sounds like additional issues came up. If you have nothing close to a working database, then a fix is out, so it’s recreate or start from scratch. For those who want to get running soon but also want to keep history, hybrid approaches are possible: for example, export the job configuration, import it with changes so it backs up into a new space, while work on the damaged old one continues. If you have a spare system, you could even bring it into that effort…

Backing up 300 GB could take a while, and sometimes people also do it in stages (most important data first). Not immediately taking your old backup off the backend gives you some cover, and meanwhile you can even do a direct restore without a database if need be; it might run faster because it builds only a partial database.

A partial database recreate (e.g. just the latest version) might also be a faster option for your permanent database, and would reduce uploads. It’d be an experiment, likely best done (if at all) when you don’t mind risking the old backup.

In short, picking a path depends on what (if anything) is left to work with, priorities/needs, and your ambitions.


#5

Thank you once again for taking the time to answer in such detail.

Based on your comments, I have decided to be safe rather than sorry (from now on):

  1. I took my laptop to the server and hooked it up via a fast Ethernet connection (1 Gb/s).
  2. I started a new backup based on the former config (except for the connection settings and the target directory, of course).
  3. I will keep the old backup untouched for some time. In a few months I will be sure that these files are no longer needed. Then I can perform some tests: first and foremost, trying to restore files from the old backup in order to find out what I could have restored had there been a need (a rough verification sketch follows below).
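When that test-restore time comes, my rough plan is to restore into a scratch folder and compare file hashes against the live copies (for files that have not changed since the backup). A sketch of the comparison, with example paths only:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files do not need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

original_root = Path(r"C:\Data")         # live files (example path)
restored_root = Path(r"D:\RestoreTest")  # where the test restore landed (example path)

for restored in restored_root.rglob("*"):
    if not restored.is_file():
        continue
    original = original_root / restored.relative_to(restored_root)
    if not original.exists():
        print(f"missing original: {original}")
    elif sha256(original) != sha256(restored):
        print(f"differs from restore: {original}")
```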

#6

I’m still getting errors while making the first (new) backup:

Errors: [
2019-01-12 07:42:36 +01 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-CheckingErrorsForIssue1400]: Checking errors, related to #1400. Unexpected result count: 0, expected 1, hash: XAC5dEv+Fm1r7yVTrT0ZIjAbjCfhslKQ36/eLh8s5Gw=, size: 72352, blocksetid: 136892, ix: 163, fullhash: wB9Ac2KVVcgkyo6+ldl61RLb4CT394OpGoTpo5cI34E=, fullsize: 16763552,
2019-01-12 07:42:36 +01 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-FoundIssue1400Error]: Found block with ID 1536063 and hash XAC5dEv+Fm1r7yVTrT0ZIjAbjCfhslKQ36/eLh8s5Gw= and size 51832
]
BackendStatistics:
RemoteCalls: 18
BytesUploaded: 262784509
BytesDownloaded: 59623977
FilesUploaded: 13
FilesDownloaded: 3
FilesDeleted: 0
FoldersCreated: 0
RetryAttempts: 0
UnknownFileSize: 0
UnknownFileCount: 0
KnownFileCount: 5028
KnownFileSize: 130034944068
LastBackupDate: 12.01.2019 07:42:06 (1547275326)
BackupListCount: 2
TotalQuotaSpace: 0
FreeQuotaSpace: 0
AssignedQuotaSpace: -1
ReportedQuotaError: False
ReportedQuotaWarning: False
ParsedResult: Error
Version: 2.0.4.5 (2.0.4.5_beta_2018-11-28)
Messages: [
2019-01-12 07:42:06 +01 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started,
2019-01-12 07:42:13 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: (),
2019-01-12 07:42:14 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed: (4.90 KB),
2019-01-12 07:43:02 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b216a452361f641189b6c0372cc3f3569.dblock.zip.gpg (48.45 MB),
2019-01-12 07:43:05 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b216a452361f641189b6c0372cc3f3569.dblock.zip.gpg (48.45 MB),

Searching the web tells me that the reason for this error might be that files are being changed while they are being transferred. But this must be happening all the time, so should I only be running backups at night?
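To test that theory, I could note each file’s modification time and size before and after reading it in full, and flag anything that changed in between. This is only a sketch of my test idea (with a hypothetical source path), not what Duplicati does internally:

```python
from pathlib import Path

def changed_during_read(path: Path, chunk_size: int = 1024 * 1024) -> bool:
    """Read the whole file and report whether it was modified meanwhile."""
    before = path.stat()
    with path.open("rb") as f:
        while f.read(chunk_size):
            pass
    after = path.stat()
    return (before.st_mtime_ns, before.st_size) != (after.st_mtime_ns, after.st_size)

# Scan a folder that is part of the backup (hypothetical path).
for p in Path(r"C:\Data").rglob("*"):
    if p.is_file():
        try:
            if changed_during_read(p):
                print(f"changed while being read: {p}")
        except OSError as e:
            print(f"could not read {p}: {e}")
```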


#7

Still the same issue:
Errors: [
2019-01-21 09:35:33 +01 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-CheckingErrorsForIssue1400]: Checking errors, related to #1400. Unexpected result count: 0, expected 1, hash: tYRdC0NRPdZoyjGFytfX186Kq3WL0w9q09TNEJp3+cA=, size: 76752, blocksetid: 140448, ix: 164, fullhash: hme7Jh9Y/rPFL81gWs7I3KCmvgZuGeFYvdefw/GEeTk=, fullsize: 16870352,
2019-01-21 09:35:33 +01 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-FoundIssue1400Error]: Found block with ID 1631174 and hash tYRdC0NRPdZoyjGFytfX186Kq3WL0w9q09TNEJp3+cA= and size 35712
]

Is this something I can ignore? Please advise; I’m not feeling too good knowing that errors are occurring during the backup.
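If it helps, the two log lines seem to describe the same block hash once expected with size 76752 and once found with size 35712. Here is a rough sketch of how I could check what the local database actually holds for that hash, run against a copy of the database; the path is an example and the Block table layout is an assumption based on the default Duplicati 2 schema:

```python
import sqlite3

# COPY of the job's local database ("Local database path") -- example path.
DB_PATH = r"C:\Users\me\AppData\Local\Duplicati\backup-copy.sqlite"
BLOCK_HASH = "tYRdC0NRPdZoyjGFytfX186Kq3WL0w9q09TNEJp3+cA="

con = sqlite3.connect(DB_PATH)
try:
    rows = con.execute(
        'SELECT "ID", "Hash", "Size", "VolumeID" FROM "Block" WHERE "Hash" = ?',
        (BLOCK_HASH,),
    ).fetchall()
    if rows:
        for row in rows:
            print(row)
    else:
        print("no Block rows with that hash")
finally:
    con.close()
```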