Several backup attempts failed

Using Duplicati 2.0.5.1_beta_2020-01-18 on Win8.1
After a long time of smooth backup runs, today one of my 3 jobs failed:

  • Usually the job took ~1:30 min (for a 6.8 GB source); this time it tried for more than 7 minutes, as indicated by constant upload activity of ~12 Mbit/s in Task Manager (with occasional interruptions). It started scanning the newly added files, but then showed no further progress.
  • It gave up with “at least one error occurred”. The “General” log showed no error, but also no success entry. The “Remote” log showed a lot of “put duplicati … zip.aes” entries, but no error message.
  • “Server logs” (under “About”) finally reported “18. Jun. 2020 19:33: Failed while executing “Backup” with id: 16” with a lengthy message (see bottom).

Can anyone make sense of this?
Is there anything I can do to get this up and running again?

PS: Same problem one day later. The progress bar stopped advancing after “382 files (0 bytes) to do at 1.30 MB/s” (the speed varied in its last digit, so it’s not a complete freeze).

System.AggregateException: At least one error occurred. ---> System.AggregateException: The channel “BackendRequests” is retired ---> CoCoL.RetiredException: The channel “BackendRequests” is retired
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at CoCoL.Channel`1.d__32.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.BackupHandler.d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.BackupHandler.d__20.MoveNext()
--- End of inner exception stack trace ---
   at Duplicati.Library.Main.Operation.BackupHandler.d__20.MoveNext()
--- End of inner exception stack trace ---
   at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.Backup(String inputsources, IFilter filter)
   at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)
---> (Inner Exception #0) System.AggregateException: The channel “BackendRequests” is retired ---> CoCoL.RetiredException: The channel “BackendRequests” is retired
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at CoCoL.Channel`1.d__32.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.BackupHandler.d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.BackupHandler.d__20.MoveNext()
--- End of inner exception stack trace ---
   at Duplicati.Library.Main.Operation.BackupHandler.d__20.MoveNext()
---> (Inner Exception #0) CoCoL.RetiredException: The channel “BackendRequests” is retired
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at CoCoL.Channel`1.d__32.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.BackupHandler.d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.BackupHandler.d__20.MoveNext()<---
---> (Inner Exception #1) System.AggregateException: At least one error occurred. ---> System.IO.IOException: Unable to read data from the transport connection: The connection was closed.
   at System.Net.ConnectStream.EndWrite(IAsyncResult asyncResult)
   at System.IO.Stream.<>c.b__53_1(Stream stream, IAsyncResult asyncResult)
   at System.Threading.Tasks.TaskFactory`1.FromAsyncTrimPromise`1.Complete(TInstance thisRef, Func`3 endMethod, IAsyncResult asyncResult, Boolean requiresSynchronization)
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Duplicati.Library.Utility.Utility.d__11.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Backend.WEBDAV.d__36.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.d__21.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<>c__DisplayClass17_0.<b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.d__18.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.d__18.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.d__17.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.d__15.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<b__13_0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<b__13_0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at CoCoL.AutomationExtensions.d__10`1.MoveNext()
--- End of inner exception stack trace ---
---> (Inner Exception #0) System.IO.IOException: Unable to read data from the transport connection: The connection was closed.
   at System.Net.ConnectStream.EndWrite(IAsyncResult asyncResult)
   at System.IO.Stream.<>c.b__53_1(Stream stream, IAsyncResult asyncResult)
   at System.Threading.Tasks.TaskFactory`1.FromAsyncTrimPromise`1.Complete(TInstance thisRef, Func`3 endMethod, IAsyncResult asyncResult, Boolean requiresSynchronization)
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Duplicati.Library.Utility.Utility.d__11.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Backend.WEBDAV.d__36.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.d__21.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<>c__DisplayClass17_0.<b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.d__18.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.d__18.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.d__17.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.d__15.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<b__13_0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Duplicati.Library.Main.Operation.Backup.BackendUploader.<b__13_0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at CoCoL.AutomationExtensions.d__10`1.MoveNext()<---
<---
<---

Are these all on the same system, going onto the same destination?

From bits of the log, it looks like this is WebDAV. Can you describe?

The inner exception, translated from the German, seems to be saying:

---> (Inner Exception #0) System.IO.IOException: Unable to read data from the transport connection: The connection was closed.

   at System.Net.ConnectStream.EndWrite(IAsyncResult asyncResult)

May be network or server failure. BackendUploader suggests an upload.
This is why I want to learn more about the destination. Any logs for that?

You could certainly test it. Trivial one is “Test connection” on Destination.
A more thorough test is to set up an empty folder and adjust the Export As Command-line URL to point at that empty folder.
Then use the URL with Duplicati.CommandLine.BackendTester.exe to see if that shows an upload issue.
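
For illustration, a minimal BackendTester invocation might look like this (the server, credentials, and path are placeholders; point it at the empty test folder, never at the real backup folder, since the tester creates and deletes files there):

```
REM hypothetical example - substitute your own server, credentials and empty folder
Duplicati.CommandLine.BackendTester.exe "webdav://USER:PASSWORD@webdav.example.com/duplicati-test"
```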

I’m not entirely sure where things stand now, but the subject indicates a persistent error. Getting a better log would be another option. The big log post just shows where it ended, not what happened earlier.

While there are some hints in the stack, About → Show log → Live → Retry might show better history.
Although your backup isn’t big, if posting from the screen is too cumbersome, a better log can be had by
--log-file somewhere and --log-file-log-level=retry for starters (and possibly level adjustments later).
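
As a sketch, a command-line run with such a log might look like the following (the storage URL, source path, and log path are placeholders, and your usual job options such as the passphrase would need to be added; the same two log options can instead be added as Advanced options on the GUI job):

```
REM hypothetical example of a backup run with a retry-level text log
Duplicati.CommandLine.exe backup "webdav://webdav.example.com/backup" "D:\Data" ^
  --log-file="C:\Temp\duplicati-job16.log" --log-file-log-level=retry
```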

EDIT:

With most networking problems, restarting the equipment can be helpful if you haven’t tried doing that yet.

You haven’t mentioned database work yet, which means you might still have old logs. You could check for “RetryAttempts” in “BackendStatistics” in “Complete log” to see if you were having prior signs of trouble…
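
For reference, the area to look at sits in the job’s Complete log roughly like this (key names as described above; the values are purely illustrative, not from your system):

```
"BackendStatistics": {
    "RemoteCalls": 14,
    "RetryAttempts": 0,
    ...
}
```

A RetryAttempts value above 0 in older logs would be a prior sign of trouble.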

Thanks for all your hints, which I tried to meticulously work through :-)

Are these all on the same system, going onto the same destination?
From bits of the log, it looks like this is WebDAV. Can you describe?

Yes, the jobs all back up data from different local shares (on a NAS) to the same destination root path (with different subfolders), which is a German Telekom cloud, via WebDAV. All three ran smoothly for months. Two of the jobs still do; just one of them (id:16) has been failing since June 18. That’s why I doubt there is a general, persistent connection problem with the destination. I also checked the disk space on the cloud drive, and there is plenty remaining (multiple GB).

Trivial one is “Test connection” on Destination.

As expected: no problems with this test. Same result for the BackendTester.

a better log can be had by --log-file somewhere and --log-file-log-level=retry

Here is the interesting part of the log:

2020-06-22 11:44:19 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-bbaa1a037fc8c4fbd803affad9af41f3f.dblock.zip.aes
2020-06-22 11:44:19 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-b2f7581aa158d41e1b88d5c3551d0c753.dblock.zip.aes
2020-06-22 11:44:19 +02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Deleting: duplicati-b89a845ae82314652aef54dcc75b663cf.dblock.zip.aes
2020-06-22 11:44:19 +02 - [Warning-Duplicati.Library.Main.Operation.Backup.UploadSyntheticFilelist-MissingTemporaryFilelist]: Expected there to be a temporary fileset for synthetic filelist (160, duplicati-i5d2d8b36c8154c709aae1815d0d137d8.dindex.zip.aes), but none was found?
2020-06-22 11:44:37 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b117da5947e7a4f1eb7e8868cb79855cc.dblock.zip.aes (49,95 MB)
2020-06-22 11:44:37 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b96586a2c8b83463795bfaf6d8cf99542.dblock.zip.aes (49,94 MB)
2020-06-22 11:44:41 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b1d1ffee4ac474776b70af97e4045f026.dblock.zip.aes (50,00 MB)
2020-06-22 11:44:41 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b7e863af312c8484c8d8736512270956e.dblock.zip.aes (49,97 MB)
2020-06-22 11:45:37 +02 - [Retry-Duplicati.Library.Main.Operation.Backup.BackendUploader-RetryPut]: Operation Put with file duplicati-b117da5947e7a4f1eb7e8868cb79855cc.dblock.zip.aes attempt 1 of 5 failed with message: Unable to read data from the transport connection: The connection was closed.
System.IO.IOException: Unable to read data from the transport connection: The connection was closed.
<… followed by call stack trace …>

It indeed says, in a nutshell, “Operation Put failed with message: Unable to read data from the connection. The connection was closed.” I’m not clear whether the read failure is a result of the closed connection, or whether the connection was closed because no more data could be read…?

I’m not quite sure how to read your comment:

May be network or server failure. BackendUploader suggests an upload.
This is why I want to learn more about the destination. Any logs for that?

(How) can I create more logs on the backend?

restarting the equipment …

I usually reboot my machine once a day.

you could check for “RetryAttempts” in “BackendStatistics” in “Complete log” to see if you were having prior signs of trouble…

In the “Remote” log history for this job starting June 18 (the first failure), I only see tons of “Put” lines (with a few “List” entries interspersed), but no errors. Interestingly, I see a reasonable mixture of “Put” and “Get” entries for the period before, when the backup still worked. No “Get” entries after June 17. Not sure if this is what you were looking for…

It depends on the backend and whose it is. For your case, I guess it’s German Telekom you’d need to ask.
Web hosting services would (I think) usually keep logs. I don’t know about WebDAV. I don’t use either one.

Although the chances of this being the cause are looking lower, it can be any machine in the network path.
For my own consumer-grade equipment, sometimes it’s the PC, or its Wi-Fi adapter, or router, or modem.

Viewing the log files of a backup job (i.e. <job> → Show log) is the log with the statistics. Too many logs…

As a guess: seeing the backup start uploading 4 different dblock files at once (one can tell they’re different because the sizes differ), they’d all be competing for upload bandwidth, so maybe some timed out.

Reducing the option below from its default of 4 to something low (like 1) might be a useful experiment:

[screenshot: the --asynchronous-concurrent-upload-limit advanced option, default 4]

However, I don’t know why the other backups wouldn’t be failing the same way, unless their size is very different.
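
In GUI terms that means adding the Advanced option below to the failing job (the value 1 is just the suggested test setting):

```
--asynchronous-concurrent-upload-limit=1
```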

I identified the files that were added since last Wed and temporarily removed the ones that were easy to extract:

  1. a single MS-WORD file (which usually was displayed last, when the job got stuck)
  2. a folder with 95MB (92 files, 4 folders) - one *.lnk, rest *.jpg
  3. a folder with 223MB (40 files, 1 folder) - one *.lnk, rest *.jpg

Then I ran the job in question (time stamp hh:mm for cross-reference to my log files):

  • 18:09 with 1, 2 and 3 removed: completed with 1 (unrelated) warning, no errors. HOWEVER: the log file showed the same pattern as the failed job, but with a small change. The log of this morning’s failures said in the first occurrence “Operation Put with file duplicati-…dblock.zip.aes attempt 1 of 5 failed with message: Unable to read data from the transport connection: The connection was closed.” (a read failure).
    The log of this evening’s failure read (1st occurrence) “Operation Put with file duplicati-…dblock.zip.aes attempt 1 of 5 failed with message: Unable to write data to the transport connection: An existing connection was closed by the remote host.” (a write failure, closed by the remote host).
    BTW: The log shows that BackendUploader-RetryPut tries again and again (up to “6 of 5” times :-)) before it gives up, which is why the job usually takes ~7 minutes before aborting.

  • 18:14 reran same job, without change in files: job completed successfully - no errors, no warnings, log file clean

  • 18:17 adding file 1): job completed successfully - no errors, no warnings, log file clean

  • 18:20 adding folder 2): job completed successfully - no errors, no warnings, log file clean

  • 18:30 adding folder 3): job failed with same connection error as before, progress bar got stuck at “382 files (0 bytes) to do”

  • 18:42 removed 1 subfolder in 3): failed with “read” error in 1st occurrence, progress bar got stuck at “0 files (0 bytes) to do”

  • 19:12 removed 5 more (jpg) files in 3): same failure

  • 19:16 removed all other (jpg) files, except for one: job completed successfully - no errors, no warnings, log file clean (# of examined files was reported as: 6461)

  • 19:19 added again the 5 (jpg) files I had removed in the second to last step: Job didn’t even start, instead threw a Fatal Error: “Unexpected difference in fileset version 0: 22.06.2020 19:03:53 (database id: 164), found 6842 entries, but expected 6868”

BTW: I didn’t get around to playing with --asynchronous-concurrent-upload-limit, but I doubt it has any effect, since up to now there was never a problem (and the other jobs work OK).

… I give up …

Results of more testing:
In several steps I deleted versions going back beyond the last good run of the job. While processing, the GUI displayed a lot of files that were ages old, and I wondered why they should be considered at all.
While the DELETE jobs seem to have run OK (except for the glitch with the version names), all backup jobs failed with a closed connection, as before. The progress indicator in the last run even displayed a ridiculous state before the job timed out: a negative number of bytes to do. While it usually hung at some 380 files and 0 bytes to do, this time it went far below zero…

I’m glad this isn’t my only backup path for my data. Guess there is no way but to ditch all this crap and create a new job from scratch.
Let me repeat that 2 other jobs reading from the same NAS and writing to the very same destination cloud space continue working like a charm.

This indeed seems to be the clue. The behavior became clear with the following test setup in one of my well-behaving jobs; let’s call it “id2”. I gradually added new source files, so the job had something to do, and inspected the log file at detail level “retry”:

  1. re-ran id2 after a successful backup, with --asynchronous-concurrent-upload-limit=2 set [let’s call this option “acul” for short]; completed successfully
    ==> log: no “Put” events, just a few “Gets”
  2. added a folder with 9x 1 MB + a folder with 7x 10 MB of test data; ran id2, completed successfully
    ==> log: 3 “Put” events started and completed (with sizes of ~9 kB … ~10 MB)
  3. added a file with 5 MB + a subfolder with 1x 100 MB of test data; ran id2, completed successfully
    ==> log: 3 “Put” events with 50 MB, ~39 MB and 227 kB. The latter completed immediately.
    BackendUploader-RetryPut of the 50 MB block gave up after ~1 min (“attempt 1 of 5 failed”) because the connection was closed by the remote host (!). The next message, immediately after the call stack, indicated another retry of the same block:
    … BackendEvent]: Backend event: Put - Retrying: <50MB block>
    The 39 MB block and another small (kB) one were started and completed.
    Then the 50 MB block was renamed, and it was successfully uploaded (completed) under the new name.
    {I would happily upload the log file here if I knew of an option to do so}

After this insight I set the acul value for my initially failing job to 1 and, hooray, it completed!

So it seems that as soon as there are large enough new blocks to upload, an acul value >1 gives problems. The retry logic can sometimes cope, but sometimes not, when too many blocks are trying to upload concurrently and/or the acul value is too high (2 may be OK, but not higher?).
The explanation for why the “well-behaving” jobs continued to run even with acul=4 is now clear: as long as the total size of newly added source files doesn’t exceed the size of one block (or 4 blocks?) and the upload can be handled without concurrency, the job completes OK. But as soon as I add files with a total size larger than a few blocks, retrying gets stuck and the upload fails.

The questions however remain:

  • Is this really only a problem on the remote end? Then why did all these jobs work OK with the default acul value in the past? For example, on May 16 I added a branch with some 6 GB to one of the jobs, and it then completed successfully.
  • Can I collect any more details on the reason for the connection failure?

FWIW: I checked with Deutsche Telekom, my provider of the destination (cloud) space, asking for log files. They say they don’t keep such logs, and that this was, after all, a message from Duplicati: “Why don’t you ask them?” I didn’t really expect anything more helpful than this…

You can usually post files to the forum by dragging them in from File Explorer, but file types are limited.
Posting text with three backquotes (```) above and below is a good way to get modest-sized things up.

If they’re closing connections on you, it likely is a problem on the remote end (unless the protocol is being run badly and is offending them).

You can test the remote-timeout theory another way, by throttling the upload, e.g. using the Advanced option --throttle-upload or the speed control near the top of the web UI screen. Note that 2.0.5.1 mistakenly slows download too.
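
As a sketch, an Advanced-option setting for such a test might be (the value is an arbitrary illustration; pick something clearly below your measured ~12 Mbit/s, i.e. ~1.5 MB/s):

```
--throttle-upload=500KB
```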

If you don’t believe the error message about which side closed the connection, low-level networking analysis may be required…

In the further past, e.g. 2.0.4.x, there were no parallel uploads. I’m not sure what version you had on May 16.
If you have logs (or emails of logs), you could see what the value of RetryAttempts was then in the job log.

As mentioned above, you can try to force a timeout to see if the remote has one. The exact default timeout on the Duplicati end may vary with the storage destination type, but timeouts generally seem to hit OneDrive the hardest.

HTTP options might affect WebDAV, because it’s HTTP-based, so you could try setting these very high:

http-operation-timeout is the one that OneDrive sometimes needs raised.
http-readwrite-timeout is needed rarely-to-never, but I’ll mention it anyway.
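
As a sketch, deliberately generous illustrative values might look like this (both options take a timespan):

```
--http-operation-timeout=15m
--http-readwrite-timeout=15m
```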

If you want to study your current retry and failure pattern, one interesting case was in the original post, at

2020-06-22 11:44:37 +02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b117da5947e7a4f1eb7e8868cb79855cc.dblock.zip.aes (49,95 MB)
2020-06-22 11:45:37 +02 - [Retry-Duplicati.Library.Main.Operation.Backup.BackendUploader-RetryPut]: Operation Put with file duplicati-b117da5947e7a4f1eb7e8868cb79855cc.dblock.zip.aes attempt 1 of 5 failed with message: Unable to read data from the transport connection: The connection was closed.
System.IO.IOException: Unable to read data from the transport connection: The connection was closed.

which made me wonder if something was running a 60-second timeout, but one timing isn’t much proof.

Wireshark can see the TCP-level close (even though data content is encrypted), if you’d like to do that…
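
If you try that, a display filter along these lines would narrow the view to connection teardowns with the WebDAV server (1.2.3.4 is a placeholder for your Telekom endpoint’s IP address):

```
ip.addr == 1.2.3.4 && (tcp.flags.fin == 1 || tcp.flags.reset == 1)
```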

EDIT:

As suspected, “In die Übertragungsverbindung können keine Daten geschrieben werden: Eine vorhandene Verbindung wurde vom Remotehost geschlossen” (“Unable to write data to the transport connection: An existing connection was closed by the remote host”) does not appear in the Duplicati code, meaning it was issued by the Microsoft .NET Framework or by Microsoft Windows. A Google search finds programs that are probably not .NET Framework applications, so my guess is Windows. Feel free to research further.

Thank you @ts678 for all your comments and suggestions.
Before planning a more systematic test setup, maybe you can also help me clarify some usability questions, which would help me document the test runs better:

  • Is there a way (an option?) to dump/record into a log file all the options used in the current run (advanced, job-specific + global)? If I play with timeouts and other options, it’s tedious to manually note all the options used for each run.
  • In this context I browsed through the advanced options and ran across --log-retention, which suggests that log data is stored “in the database”, besides the log files. Do you have (a pointer to) more details on this?

Unfortunately I never had a need for log files, because all jobs ran smoothly, so I don’t have any logs from before the first problem (on Jun 18). This ties back to my question above: how could I have gotten the version info (+ all options) written into such a log? Or is there another way to determine when I performed the last update of the installation (file timestamps?)?

Looking through my logs, I indeed found a pattern of alternating read and write failures for the different blocks, where the failure occurred exactly 60 s after the Put operation for the corresponding block started:

id19 T1 07-02 crop.zip (4.4 KB)

In the log I deleted the call stack messages except for their beginning and end, for clarity. You can see that the messages differ and alternate: after a read failure, the stack trace starts with “Ende der Stapelüberwachung vom vorhergehenden Ort, an dem die Ausnahme ausgelöst wurde” (“End of stack trace from previous location where the exception was thrown”). After a write failure (and closure by the remote host), the stack trace starts with “Ende der internen Ausnahmestapelüberwachung” (“End of inner exception stack trace”).
BTW, you can ignore the end of the log; this was probably when I aborted the job.
On a curious note: Duplicati counts up to “attempt 6 of 5” before it gives up (!)

EDIT: After browsing through some more logs, I also ran across a pattern where there are several “read” closures in a row before one closure by the remote host (with more read closures following). So the occurrence of one or the other seems to be erratic…

There is no easy way, and if you use global options it gets harder. I’m also not sure whether the GUI throttle control shows up anywhere. Global options are not part of Export As Command-line, but they do merge into GUI Commandline or regular GUI runs. Options are also available to run-script-before as environment variables, which I suppose could be dated and dumped by a batch file or something to a separate log (a sketch follows after the link below), or if truly ambitious, please see:

Where does script output go to (difference between BEFORE and AFTER?)
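
A minimal sketch of such a batch file, assuming (per Duplicati’s example scripts) that options are exposed to scripts as environment variables prefixed with DUPLICATI__ (the log path is a placeholder):

```
@echo off
REM dump-options.bat - set as --run-script-before on the job
REM assumption: Duplicati publishes its options as DUPLICATI__* environment variables
echo ===== %DATE% %TIME% ===== >> C:\Temp\duplicati-options.log
set DUPLICATI__ >> C:\Temp\duplicati-options.log
```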

Viewing the log files of a backup job refers to them as files, but they’re actually in the database table called LogData, in the Message column, if you really want to get at them in a different way using some SQLite tool.

DB Browser for SQLite is one, but there are others around. Some can be scripted if you’re very ambitious.
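
For example, pasted into the Execute SQL tab of such a tool against a copy of the job database (the LogData table and Message column are as described above; the Timestamp column and its sort order are assumptions):

```
-- newest stored log messages first
SELECT Timestamp, Message
FROM LogData
ORDER BY Timestamp DESC
LIMIT 20;
```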

These are separate from the normal text log files you might make yourself using --log-file and its friends.

This is the easy part, as it’d be in the Full log as Version if you had such old logs, but I guess you don’t.

This depends on how you updated. If you installed manually via .msi, the Program Files\Duplicati 2 timestamps should be fine. If you let Duplicati update itself based on the web UI prompt, then Program Files holds the base version, which searches for and runs the latest update, so you’d have to go looking for that latest update.

Downgrading / reverting to a lower version suggests its places, depending on how Duplicati was installed.

About → System info will show your BaseVersionName and ServerVersionName. May or may not differ…

A more complex way to tell which version of Duplicati was in use at a given time is to pick a destination file with a file time of interest (don’t go by the time embedded in the dlist file name), decrypt it if needed, unzip it, and look at its manifest file. AES Crypt or Duplicati’s CLI SharpAESCrypt.exe can decrypt it if needed. Example manifest:

{"Version":2,"Created":"20200707T125024Z","Encoding":"utf8","Blocksize":125952,"BlockHash":"SHA256","FileHash":"SHA256","AppVersion":"2.0.5.107"}
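
A sketch of that decrypt-and-inspect sequence, using SharpAESCrypt.exe’s d (decrypt) mode (the passphrase is a placeholder, and the file name is just the dblock from the earlier log excerpt):

```
REM decrypt one remote volume to a plain zip
SharpAESCrypt.exe d MYPASSPHRASE duplicati-b117da5947e7a4f1eb7e8868cb79855cc.dblock.zip.aes volume.zip
REM then open volume.zip with any zip tool and view the file named "manifest"
```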