The … section above is probably where it broke. It would be nice to have more info about that gap.
Possibly you don’t have a `log-file=<path>` with at least `log-file-log-level=retry` set up to record the log.
This is often the case, but when actively trying to debug, logs (and maybe DB bug reports) may help.
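For example, on the command line it could look roughly like this (the destination URL, source path, and log path are placeholders; in the GUI the same two settings can be added as advanced options on the job):

```text
Duplicati.CommandLine.exe backup "<destination-url>" "<source-folder>" --log-file=/path/to/duplicati.log --log-file-log-level=Retry
```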
One guess would be step 4 “Duplicati tries to delete file XXXYXXX, N times, and then gives up / gets terminated by max runtime Window.” but which one is it, and what does “max runtime Window” do? There’s no runtime limit that Duplicati provides, as far as I know. Is there an external monitor in use?
Because you ask: “I think by checking the source, it’s very easy to see …” is rarely how it goes. It goes better when it’s less of a mystery and more of Steps to reproduce, as Issues requests.
After interrupted backup, next backup failed with remote files that are not recorded in local storage #4485 didn’t get any steps to reproduce, but you can see some of the chat about SQL DBs and transactions.
403 error during compact forgot a dindex file deletion, getting Missing file error next run. #4129 also gets into them. Duplicati uses SQL DB commit and rollback, among other things. Are you an expert?
I’ll point to some code in case anyone wants to look. I’m not a C# or SQL expert, but my ideas here are:
You can see above that there’s tracking of the remote volumes using states. They are defined in code at:
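From memory, the lifecycle looks roughly like the sketch below; the linked source is authoritative, and the names and comments here are my recollection rather than a copy:

```csharp
// Rough sketch of the remote volume states, from memory -- check the linked
// source for the authoritative definition and exact meanings.
public enum RemoteVolumeState
{
    Temporary,   // volume created locally, upload not started
    Uploading,   // upload started; outcome unknown if interrupted
    Uploaded,    // backend reported the upload as complete
    Verified,    // later confirmed present, e.g. by a remote listing
    Deleting,    // delete requested, not yet confirmed
    Deleted      // confirmed gone from the destination
}
```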
Other commentary from the original author is at Recover automatically from interrupted backups #1243.
It looks to me at first glance like an extra file must be unknown to the `Remotevolume` table, not just in an odd state.
A theory that it deleted the file and forgot that it deleted it would have to explain how the DB row got removed.
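If the local database (or a database bug report copy) is available, a quick check of whether the file is tracked at all, and in what state, might look like this; the table name is from the schema as I understand it, the column list is approximate, and the file name is a placeholder:

```sql
-- Run against the job's local SQLite database (or a bug-report copy).
-- Shows whether the remote file is tracked at all, and in what state.
SELECT Name, Type, State, Size
FROM Remotevolume
WHERE Name LIKE '%XXXYXXX%';
```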
It would be nice to have clues from the log messages in a `Retry` level log, or even at `Information` level.
As you can see in the above code and comments, the delete is supposed to be verified. Can you find a flaw?
For searching the source code, one can often find the state handling by searching for the enum value names.
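For example, something along these lines against a clone of the Duplicati repository (the value name here is from my sketch above, so double-check it against the real enum first):

```text
grep -rn "RemoteVolumeState.Deleting" Duplicati/
```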
I don’t know for sure that this failure is a Compact, but one can see the state change and later actions at:
and its requirement is to be able to list all files (see the `backend.List()` at the top of `RemoteListAnalysis`), which might be more than one wants after every delete. There’s also the point that it shouldn’t be needed, referring to “(which I think shouldn’t be necessary after all).” in the original post. It is verified, just not instantly, which possibly means you find out later. I’m not immediately blaming a server, but server logs might help.
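Conceptually, that verification is a reconciliation of the backend listing against the states the local database expects. A very rough sketch of the idea (not Duplicati’s actual `RemoteListAnalysis` code; the states and helper shapes are just for illustration) might be:

```csharp
using System.Collections.Generic;
using System.Linq;

// Conceptual sketch only, not Duplicati's actual RemoteListAnalysis code:
// reconcile the backend's file listing against what the local DB expects.
enum VolumeState { Temporary, Uploading, Uploaded, Verified, Deleting, Deleted }

static class ListReconciler
{
    // Returns human-readable findings; real code would update DB state instead.
    public static IEnumerable<string> Reconcile(
        ISet<string> remoteNames,                                  // from backend.List()
        IEnumerable<(string Name, VolumeState State)> dbVolumes)   // from the local DB
    {
        var known = new HashSet<string>();
        foreach (var (name, state) in dbVolumes)
        {
            known.Add(name);
            bool onRemote = remoteNames.Contains(name);

            if (state == VolumeState.Deleting && !onRemote)
                yield return $"{name}: delete confirmed, mark Deleted";
            else if (state == VolumeState.Deleting && onRemote)
                yield return $"{name}: still present, delete needs a retry";
            else if (state == VolumeState.Uploading && onRemote)
                yield return $"{name}: upload made it after all, promote it";
            else if (state == VolumeState.Verified && !onRemote)
                yield return $"{name}: missing remote file";
        }

        // Files the backend has but the DB does not track at all: "extra" files.
        foreach (var extra in remoteNames.Except(known))
            yield return $"{extra}: extra remote file, unknown to the DB";
    }
}
```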
I don’t want to pick further at the “Just for fun link” on file transfer, but while Duplicati can’t rename remote files, it does a similar operation on upload retries by uploading the same contents under a different name. When an upload failure is reported by the server, one never knows if that file made it up (all or part) or not. Rather than make assumptions, the idea seems to be to use state tracking to try to do the right thing later.
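As a rough illustration of that idea (not Duplicati’s actual retry code; the naming scheme and helper delegates are made up for the sketch):

```csharp
using System;

// Conceptual sketch of "retry under a new name": after a reported upload
// failure we cannot know whether the old name exists remotely (whole or in
// part), so each retry uploads the same contents under a fresh name and the
// old name stays tracked (e.g. as Uploading) for later reconciliation.
static class UploadRetrySketch
{
    public static string UploadWithRetries(
        Func<string, bool> tryUpload,     // hypothetical: true on success
        Action<string> trackAsUploading,  // hypothetical: record name in local DB
        int maxRetries = 3)
    {
        for (int attempt = 0; attempt < maxRetries; attempt++)
        {
            // A fresh, random remote name for every attempt (format is illustrative).
            string name = $"duplicati-{Guid.NewGuid():N}.dblock";
            trackAsUploading(name);

            if (tryUpload(name))
                return name;   // success: this name can be promoted/verified later

            // Failure: never reuse the name; a whole or partial file may exist remotely.
        }
        throw new Exception("Upload failed after retries");
    }
}
```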
Odd results can occur. For example, here I’m seeing a file that actually made it to the destination get promoted from `Uploading`. Problem is, it’s a file that’s not needed now (because it was presumed lost), and it’s not hooked up right…
Remote file referenced … but not found in list, registering a missing remote file #4586