The error in the topic title has been solved by running repair, so I am not directly asking for support on it. Rather, I am trying to figure out why this error seems to come up rather frequently, because I think it shouldn’t happen. Since I just set up a new Duplicati install (on Win 10), it’s pretty straightforward what I did:
After installing duplicati, it recognized an old database from a previous install. I deleted the database and changed the destination from Amazon Cloud to Backblaze B2 (completely new and empty B2 account). I also unselected all source folders except for a few which weighed about 9 GB. I also removed the amazon specific settings from the backup job and told it to go ahead and backup.
That was yesterday. I don’t think the backup completed before I shut down the computer, so when I turned it on this evening, it continued backing up. At that time it was not yet running as a service. I didn’t log out the entire evening, but I switched to another user and back.
When the backup completed (or perhaps it actually completed a second time, because the dashboard says “2 versions”), I turned off the tray icon and switched to running Duplicati as a service. I then wanted to do a manual re-run, and that’s when I got the error:
Found 240 remote files that are not recorded in local storage, please run repair
Any ideas what might have caused those lost files and how it could be avoided in the future? Couldn’t Duplicati at least try a repair by itself instead of whining to the user?
Very good point! Unfortunately, I cannot say with absolute certainty, as I have already started a new backup after adding some more source files. But when I just checked the folder on Backblaze directly, it contained 243 files, so I have a very strong feeling that you are right in suspecting that Duplicati did not know about any of the files. That might also explain why the error message popped up pretty much immediately after I hit “run”.
But what does it mean in terms of why it knew nothing?
My GUESS would be that the local sqlite database file is corrupted, missing, has permissions issues, etc.
I know this doesn’t quite fit the scenario, but if the sqlite file is actually there, perhaps it couldn’t be read due to something like:
sqlite file was in a ‘locked’ state due to the shutdown (really shouldn’t happen unless MAYBE it was a hard power-off)
sqlite file was in use by one user (or service), then switching users caused a second connection attempt to the already-in-use file
Now that I think of it, if you SWITCHED between users instead of logging out/in, it’s possible one user still has the old tray-based GUI in its startup folder, so maybe it grabbed and locked the database?
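To make that second-connection scenario concrete, here’s a minimal Python sketch (not Duplicati’s actual code; the table name and path are made up for illustration) showing what happens when a second process tries to write to a sqlite file another connection has already locked:

```python
import os
import sqlite3
import tempfile

# Hypothetical stand-in for Duplicati's local database file.
db_path = os.path.join(tempfile.mkdtemp(), "backup.sqlite")

# First connection (think: the tray-icon instance still running for user A)
# takes and holds a write lock.
conn1 = sqlite3.connect(db_path)
conn1.execute("CREATE TABLE IF NOT EXISTS files (name TEXT)")
conn1.commit()
conn1.execute("BEGIN IMMEDIATE")  # acquires the write lock and keeps it open

# Second connection (think: the service instance for user B) tries to write.
conn2 = sqlite3.connect(db_path, timeout=0.1)
try:
    conn2.execute("BEGIN IMMEDIATE")
    print("acquired lock")
except sqlite3.OperationalError as e:
    print(e)  # prints: database is locked
```

So if a stale tray instance really was holding the file, any new process would see exactly this kind of “database is locked” failure rather than the database contents.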
Did you check the stored logs for anything during the timeframe when you saw the error?
No matter what the cause, it would make sense to have a meatier error message. Maybe once the mismatch is found, some additional checks could be done to narrow it down to things that might include:
local database file is locked (how’d I do that?)
local database file is empty (uh-oh)
local database file is missing (uh-oh)
local database file is out of sync with destination (uh-oh)
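The checks above could be sketched as a small triage function. This is only an illustration of the idea, not Duplicati’s code: the `Remotevolume` table name reflects my understanding of the local database schema, and the idea of comparing its row count against the destination file count is my assumption:

```python
import os
import sqlite3


def triage_local_db(db_path, remote_file_count=None):
    """Rough triage mirroring the suggested checks; messages are illustrative."""
    if not os.path.exists(db_path):
        return "local database file is missing"
    if os.path.getsize(db_path) == 0:
        return "local database file is empty"
    try:
        conn = sqlite3.connect(db_path, timeout=0.1)
        # Assumption: Remotevolume is where the local DB tracks destination files.
        (local_count,) = conn.execute(
            "SELECT COUNT(*) FROM Remotevolume").fetchone()
    except sqlite3.OperationalError as e:
        if "locked" in str(e):
            return "local database file is locked"
        raise
    if remote_file_count is not None and local_count != remote_file_count:
        return "local database is out of sync with destination"
    return "no obvious problem found"
```

Something along these lines, run when the mismatch is first detected, could turn the generic “please run repair” into a message that actually points at the cause.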