Error ID 9 during backup to OneDrive

Duplicati - 2.0.4.22_canary_2019-06-30
The backup always stops with the following error:
3. Jul. 2019 12:47: Failed while executing “Backup” with id: 9
Data should be backed up to the OneDrive cloud. The connection test is successful.

**From Protocol Data:**
    System.AggregateException: One or more errors occurred. ---> System.AggregateException: The channel is retired ---> CoCoL.RetiredException: The channel is retired
       at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
       at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
       at CoCoL.Channel`1.<WriteAsync>d__30.MoveNext()
    --- End of stack trace from previous location where exception was thrown ---
       at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
       at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
       at Duplicati.Library.Main.Operation.BackupHandler.<FlushBackend>d__18.MoveNext()
    ...
    ---> (Inner Exception #1) System.AggregateException: One or more errors occurred. ---> System.TimeoutException: HTTP timeout 00:01:40 exceeded.
       at Duplicati.Library.OAuthHttpClient.<SendAsync>d__4.MoveNext()

Is this an OAuth timeout?
Thanks for any help.

Anyone any idea?
Backup with the same configuration to a local disk works fine, so it should be a problem with OneDrive.
As mentioned before, the connection test to OneDrive during configuration is fine.

It seems to be a timeout of something, per “System.TimeoutException: HTTP timeout 00:01:40 exceeded.” However, it’s more likely a 50MB dblock (unless you increased your Remote Volume Size) trying to upload on a connection that runs below 4 Mbit/second (0.5 MByte/second). Can you run an Internet speed test?

You can also watch your dblock upload times with the server log About --> Show log --> Live --> Retry.

Use –http-operation-timeout in the backup’s Advanced options to set a value over 100 seconds if necessary.

If you need more logging (e.g. to watch long-term, or to see what’s really going on leading up to the timeout), –log-file and –log-file-log-level=retry can catch more. Or was there more that just wasn’t posted before?

The “id: 9” is meaningless here; it’s just the number of this backup. You can likely see “9” in UI URLs too.

Was this always 2.0.4.22, or were you previously on something that didn’t time out? 2.0.4.22 (and actually anything going back to 2.0.4.16) added parallel uploads, but if the uplink is the bottleneck, each transfer gets slower.

–asynchronous-concurrent-upload-limit controls this. It defaults to 4, but you can set it to 1 if that avoids the timeout.

Thanks for help, ts678!
Upload speed is 2.5 Mbit/s. Remote Volume Size is 100 MB.
The backup always stops with version 2.0.4.22.
With older versions I never had problems with timeouts or anything else. Same configuration and same upload speed. That’s why I’m unsure what the problem could be.
I think I updated from a version older than 2.0.4.16.
I will check the server log and look for the timeout setting. Will come back …

You can check, but I doubt you’ll find it there, especially if it’s at the one-size-doesn’t-fit-all default value:

https://docs.microsoft.com/en-us/dotnet/api/system.net.http.httpclient.timeout?view=netframework-4.8

OAuthHttpClient is built on HttpClient, but I suspect dblock upload because the connection test worked.

Given your upload speed, a dblock would need at least 320 seconds. I’m not sure how it worked before.

I changed “http-operation-timeout” to 5 minutes (300 seconds) and “asynchronous-upload-limit” to 2. Now it works fine. :slightly_smiling_face:
That means http-operation-timeout has to be set to Remote volume size / upload bandwidth in seconds:
100 MB / 0.25 MByte/s = 400 s? Should I increase my setting to this value?
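That rule of thumb (timeout ≈ remote volume size / upload bandwidth, plus some margin) can be sketched in Python; the function name and the 25% margin are my own illustration, not a Duplicati setting:

```python
def required_timeout_seconds(volume_mb, uplink_mbit_per_s, margin=1.25):
    """Rough lower bound for --http-operation-timeout: the time to
    push one remote volume over the uplink, plus a safety margin."""
    mbytes_per_s = uplink_mbit_per_s / 8  # 8 bits per byte
    return volume_mb / mbytes_per_s * margin

# 100 MB remote volume on a 2.5 Mbit/s uplink:
print(required_timeout_seconds(100, 2.5))  # 400.0 (320 s raw + 25% margin)
```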

It worked fine (with older versions) for more than a year. I didn’t change the configuration, and the upload speed was always the same. I didn’t know that I had to set the http-timeout parameter.
Thanks again for your help. I’m happy that it’s running again. Will play with some parameters to find out where the limits are for timeout.

Increase it to however long it actually takes to do 100 MB, plus some margin in case the speed fluctuates. Commonly an ISP for residential use won’t promise a speed, so you get an “up to” speed. Also, Duplicati running a single upload at a time can’t reach speeds as high as the typical Internet speed test, which uses parallel transfers. A single transfer at a time is subject to latency limitations based on the speed of light, and ISPs enjoy being able to advertise a higher number, which does fit many real-world cases, e.g. simultaneous videos.

My math from the 2.5 Mbit/sec you told me was 100 MB at (2.5/8 = 0.3125) MBytes/second = 320 seconds, but a single stream runs slower, so 400 seconds might be about right if you set --asynchronous-upload-limit=1. With two simultaneous uploads it could take maybe twice as long, if it happens to upload two dblocks at once.
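The arithmetic above, including the effect of parallel uploads, can be sketched the same way. This is a rough model that assumes the streams split the uplink evenly; the function name is illustrative:

```python
def upload_seconds(volume_mb, uplink_mbit, concurrent_uploads=1):
    # N parallel uploads share one uplink, so each dblock transfer
    # gets roughly 1/N of the bandwidth and takes ~N times as long.
    per_stream_mbytes = (uplink_mbit / 8) / concurrent_uploads
    return volume_mb / per_stream_mbytes

print(upload_seconds(100, 2.5, 1))  # 320.0 seconds per dblock
print(upload_seconds(100, 2.5, 2))  # 640.0 seconds per dblock
```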

I calculated with 2.5 Mbit / 10 → 0.25 MByte/s to take all the overheads into account.
But you are right. To be safe, I should set the asynchronous-upload-limit to 1, until we have a better upload speed (hopefully before I die).
In this link I found a comment from JonMikeIV saying that the standard value for the timeout is 10 minutes. But I guess he was wrong?
My big thanks to you for the fast and competent help!