On about half of my backup jobs, I’m getting this failure:
Failed: Found 68 remote files that are not recorded in local storage, please run repair
Details: Duplicati.Library.Interface.UserInformationException: Found 68 remote files that are not recorded in local storage, please run repair
at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, String protectedfile)
at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
— End of stack trace from previous location where exception was thrown —
at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
at Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.b__0(BackupResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String& paths, IFilter& filter, Action`1 method)
It started yesterday (3/8/21). I fixed it by logging into each client machine and running a database repair (which produced several index-type errors); a re-run of the job then cleared those, or so I thought. Now I'm seeing the exact same pattern again.
I thought it might be a permission issue, but the databases are all local in c:\programdata\duplicati\data on each client machine. I'm running the Nextcloud snap on Ubuntu 18.04.
This only affects about half of my clients. The others all back up with no errors, as if nothing were amiss.
I’m not sure this is a database failure. Duplicati is saying that it sees unexpected files on the back end.
Is each backup job using a unique destination? You can use the same back end, but at a minimum each job must use a unique subfolder. Two computers backing up to the same location is one possible cause of the problem you're seeing.
(An alternative is to use the prefix option but I don’t recommend that unless it’s the only option. Subfolders are safer and easier in my opinion.)
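To illustrate the two approaches, here is a sketch of how the destinations might be laid out; the hostname, folder names, and prefix value are all placeholders, and the exact WebDAV path depends on your Nextcloud setup:

```shell
# Recommended: each machine gets its own subfolder on the same back end.
#   Machine A destination: webdav://nextcloud.example.com/backups/machine-a
#   Machine B destination: webdav://nextcloud.example.com/backups/machine-b

# Alternative (not recommended): both machines share one folder, but each
# job sets a distinct --prefix so their files don't collide. By default
# Duplicati names remote volumes duplicati-*.dblock.zip etc.; with
# --prefix=machine-a they become machine-a-*.dblock.zip and so on.
#   --prefix=machine-a
#   --prefix=machine-b
```

With shared folders and prefixes, a single misconfigured job can still confuse the verification step, which is why separate subfolders are the safer choice.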
Looking at the code, I only see this exception thrown when there are unaccounted-for files on the remote side.
Duplicati is coded to treat that as an error, possibly because you don't need the local db to restore files; you can point directly at the remote folder and restore from there.
A corrupted local db could cause it as well, I suppose, since there are still some simple bugs hanging around.
I believe the local db would have to be missing entries, though I didn't look at the code that closely. A temporary permission problem blocking writes to the db is another possibility, as are the db being replaced with an older version, a hard drive error making the file go missing, or even AV software removing it. Some of those are unlikely given that multiple computers are involved, but if all the machines share the same setup, then things like AV software removing the db or blocking writes could plausibly hit several of them at once.
But I would look into drwtsn32's idea first.
Thanks Xavron and Doc for the quick response!
All backups are going to separate folders on the Nextcloud back end.
I believe the issue was on my end: the Nextcloud internal database got out of sync with the file system due to a previously undiagnosed heat problem on one of my arrays. I'm keeping my fingers crossed that fixing the array (more fans!) and running the occ files:scan command to re-sync the Nextcloud database will do the trick.
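For anyone following along, with the Nextcloud snap the rescan can be run like this (the snap exposes occ as the `nextcloud.occ` wrapper; `--all` rescans every user's files):

```shell
# Re-sync Nextcloud's internal file index with what is actually on disk.
sudo nextcloud.occ files:scan --all
```

On a non-snap install the equivalent is `sudo -u www-data php occ files:scan --all` from the Nextcloud directory.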
Unfortunately, once the error is generated, it requires me to log in to each client machine and run a repair plus a new backup cycle. That's a minor inconvenience if it's all that's required. I'll continue testing throughout the day and report back after 12 hours or so.
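If the repair-then-backup cycle keeps recurring, it can also be scripted with Duplicati's command-line client instead of clicking through the GUI on each machine. This is only a sketch: the destination URL, database filename, and passphrase below are placeholders you'd replace with each job's actual values.

```shell
:: Windows batch sketch (run on the client machine):
:: 1) repair the local database against the remote store,
:: 2) re-run the backup.
"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" repair ^
  "webdav://nextcloud.example.com/backups/machine-a" ^
  --dbpath="C:\ProgramData\Duplicati\data\EXAMPLEDB.sqlite" ^
  --passphrase="your-passphrase"

"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" backup ^
  "webdav://nextcloud.example.com/backups/machine-a" "C:\Data" ^
  --dbpath="C:\ProgramData\Duplicati\data\EXAMPLEDB.sqlite" ^
  --passphrase="your-passphrase"
```

Pointing --dbpath at the same database the GUI job uses keeps the two in sync, but test on one client first before automating it everywhere.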