Hello, I have been using Duplicati with the Microsoft OneDrive v2 storage type for some time without issue. However, my backup has lately started failing because OneDrive seems to be claiming that Duplicati is making too many requests (see the error at the bottom of this post).
Assuming that nothing has changed on the Duplicati side, I suspect that Microsoft has introduced some rate limiting.
Is throttle-upload the best setting to try to fix this? I don’t seem to have the information needed to choose a suitable value, and OneDrive is complaining about the number of requests rather than the upload rate. Currently it looks like I am performing the backup at 82475 KB/s on average, so I guess I could start throttling below that value.
Has anyone else experienced this, and if so, how did you fix it?
It might be helpful to see timing details, for example with log-file=<path> and log-file-log-level=verbose. Throttle Not Working was a big thread on this, where somebody was trying to get a steady upload rate.
Although a code change was made at the end (more on that later), an early thought was to tune things.
--fragment-size (Integer) Size of individual fragments which are uploaded separately for large files. It is recommended to be between 5-10 MiB (though a smaller value may work better on a slower or less reliable connection), and to be a multiple of 320 KiB. Default value: 10 MiB
was mentioned by the developer, but there are also --fragment-retry-delay and --fragment-retry-count available. These are OneDrive-specific, and there are also file (not fragment) level generic options number-of-retries and retry-delay. Your message says Retry-After: 55. The code change I can see is mentioned here in the thread, and discussion continues at PR Try to fix OneDrive upload throttling #4469.
Fixed throttling requests to OneDrive and respecting the server retry-after headers, thanks @tygill
but the catch with this approach is that the change is not yet in a Beta. If you want to run the latest Canary, decide more carefully (for example by watching the forum, especially the Releases category, for reported breakages); perhaps that may work. In a later Canary, there’s another change to do retries the way some servers prefer:
Added exponential backoff for retries, thanks @vmsh0
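To illustrate what exponential backoff means here, below is a generic sketch of the pattern, not Duplicati’s actual implementation (Duplicati is C#; the function name, parameters, and use of IOError are all made up for illustration). The idea is that each failed attempt doubles the wait before the next try, which is gentler on a throttling server than retrying at a fixed interval:

```python
import random
import time

def upload_with_backoff(do_upload, max_retries=5, base_delay=1.0, cap=60.0,
                        sleep=time.sleep):
    """Retry do_upload() with exponential backoff plus a little jitter.
    Generic sketch of the pattern, not Duplicati's actual code."""
    for attempt in range(max_retries + 1):
        try:
            return do_upload()
        except IOError:
            if attempt == max_retries:
                raise  # out of retries; surface the failure
            # Double the wait each attempt, capped at `cap`, with up to
            # 10% random jitter so parallel clients do not retry in lockstep.
            delay = min(cap, base_delay * (2 ** attempt))
            sleep(delay + random.uniform(0, delay * 0.1))
```

With base_delay=1 the waits grow roughly 1 s, 2 s, 4 s, 8 s, … which is why a server that throttles bursts tends to like this better than a tight retry loop.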
If you decide to try the OneDrive-specific options, adding them on the Options screen is safer, as additions made on the Destination screen sometimes get lost. While on the Options screen, make sure that the Remote volume size is not hugely different from the default of 50 MB. Very large values might complicate this situation.
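If you experiment with --fragment-size, note the constraint from the help text quoted earlier: the value should be a multiple of 320 KiB. A tiny sketch of that check (the helper name is hypothetical; only the 320 KiB rule comes from the option documentation):

```python
KIB = 1024
CHUNK_ALIGNMENT = 320 * KIB  # OneDrive wants fragment sizes in 320 KiB units

def is_valid_fragment_size(size_bytes: int) -> bool:
    """Return True if size_bytes is a positive multiple of 320 KiB."""
    return size_bytes > 0 and size_bytes % CHUNK_ALIGNMENT == 0

# The default of 10 MiB is exactly 32 alignment units, so it passes:
print(is_valid_fragment_size(10 * 1024 * KIB))  # True
print(is_valid_fragment_size(5 * 1024 * KIB))   # True (5 MiB also aligns)
print(is_valid_fragment_size(7 * 1000 * KIB))   # False (not a 320 KiB multiple)
```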
I suppose there’s also a chance that waiting a day will see a better situation if something resets or clears…
Thanks very much for your detailed response. Whilst reading through the PR you linked I saw mention of use-http-client and that reminded me that I had set this to true over a year ago when I first started using Duplicati and had problems that were resolved by doing so.
I have now removed that setting and it seems my backup finished successfully. Things look OK now but I’ll keep an eye out on it.
I am using the latest canary version (v2.0.6.104-2.0.6.104_canary_2022-06-15), and I am having the same issues. I tried turning use-http-client off and on, but it did not make any difference.
Did the rebuild really help? I get an error even if I just try to restore
Do we have any theories about why this problem doesn’t affect everyone?
Maybe the problem is only with backups from a certain size (number of files) or something like that?
According to the MS365 Admin Center, my data is located in Australia. So, it would seem that the issue we are having is not related to the datacentre location.
mr-flibble, as I mentioned, I am storing two different backups in OneDrive. The backup with the larger number of files and amount of data is working OK; I am having issues with the smaller one. Also, both backups use the standard Duplicati remote volume size (50 MB).
and if that doesn’t help, an idea might be to see whether setting this to 1 helps Duplicati stay within OneDrive’s limits:
--asynchronous-concurrent-upload-limit (Integer): The number of concurrent uploads allowed
When performing asynchronous uploads, the maximum number of concurrent uploads allowed. Set to zero to disable the limit.
* default value: 4
because there are parallel uploads by default (for more speed, while OneDrive may wish for less…).
Unless Duplicati coordinates its uploads, OneDrive might perceive a lack of respect for its Retry-After.
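To make the coordination point concrete, here is a sketch of one way parallel uploaders could honor a shared Retry-After window: when any worker sees the header, all workers pause until the deadline passes. This is hypothetical (class and method names are invented), assuming Retry-After arrives as a number of seconds, and is not a description of Duplicati’s internals:

```python
import threading
import time

class RetryAfterGate:
    """Shared gate so parallel uploaders respect one Retry-After deadline.
    Hypothetical sketch; not Duplicati's actual structure."""

    def __init__(self, clock=time.monotonic):
        self._lock = threading.Lock()
        self._resume_at = 0.0  # earliest time requests may resume
        self._clock = clock

    def report_retry_after(self, seconds: float) -> None:
        """Called when a 429/503 response carries Retry-After: <seconds>."""
        with self._lock:
            # Keep the furthest deadline seen so far.
            self._resume_at = max(self._resume_at, self._clock() + seconds)

    def wait_if_throttled(self, sleep=time.sleep) -> float:
        """Before each request, sleep out any pending backoff window.
        Returns how long was waited (0.0 if no window is active)."""
        with self._lock:
            remaining = self._resume_at - self._clock()
        if remaining > 0:
            sleep(remaining)
            return remaining
        return 0.0
```

With something like this, a Retry-After: 55 (as in the error message above) would quiet all workers for 55 seconds instead of only the one that received it, which is presumably closer to what the server expects.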
Actual behavior could probably be observed with a log-file=<path> at log-file-log-level=profiling, but
be ready to open a very large file to look for the tag below, or use a line search tool (find, findstr, grep, etc.).
log-file-log-filter can isolate the RetryAfterWait tag if you like, but you lose context. That might be good or bad, because I don’t know whether context matters to OneDrive. Maybe they just see Retry-After being ignored?
I stopped using OneDrive for Business as a backup target since I kept having issues with it. Last week, I tried again (after 3 months) and it seems to work again, without any change on my side.
Just wanted to share, and see if it has been resolved for others as well?