OneDrive Throttling

Hello, I have been using Duplicati for some time with the Microsoft OneDrive v2 storage type without issue. However, lately my backup has started failing due to what seems to be OneDrive claiming that Duplicati is making too many requests - see the error at the bottom of this post.

Assuming that nothing has changed on the Duplicati side, I suspect that Microsoft has introduced some rate limiting.

Is throttle-upload the best setting to use to try to fix this? I don’t seem to have the information necessary to choose the correct value, and OneDrive is complaining about the number of requests rather than the upload rate. Currently it looks like on average I am performing the backup at 82475 KB/s, so I guess I can start throttling below this value.

Has anyone else experienced this, and if so, how did you fix it?


Duplicati -

TooManyRequests: error from request https://$skiptoken=<snip>
Method: GET, RequestUri: 'https://$skiptoken=<snip>', Version: 2.0,
Content: <null>, Headers: {
  User-Agent: Duplicati/
  Authorization: Bearer ABC...XYZ
}
StatusCode: 429, ReasonPhrase: '',
Version: 1.1,
Content: System.Net.Http.StreamContent, Headers: {
  Cache-Control: private
  Transfer-Encoding: chunked
  Retry-After: 55
  Strict-Transport-Security: max-age=31536000
  request-id: <snip>
  client-request-id: <snip>
  x-ms-ags-diagnostic: {"ServerInfo":{"DataCenter":"UK South","Slice":"E","Ring":"3","ScaleUnit":"000","RoleInstance":"LN2PEPF000095A2"}}
  Date: Wed, 28 Sep 2022 07:52:33 GMT
  Content-Type: application/json
}
{
  "error": {
    "code": "activityLimitReached",
    "message": "The request has been throttled",
    "innerError": {
      "code": "throttledRequest",
      "innerError": {
        "code": "quota"
      },
      "date": "2022-09-28T07:52:33",
      "request-id": "<snip>",
      "client-request-id": "<snip>"
    }
  }
}

No, bandwidth throttling does not help here. I have throttled to 1KB/s and I get the same “too many requests” error.

Is there any way to influence how many requests Duplicati is making?

It might be helpful to see timing details, for example with log-file=<path> and log-file-log-level=verbose.
Throttle Not Working was a big thread on this, where somebody was trying to get a steady upload rate.
Although a code change was made at the end (more on that later), an early thought was to tune things.

Microsoft OneDrive v2 (Microsoft Graph API)

  • --fragment-size (Integer) Size of individual fragments which are uploaded separately for large files. It is recommended to be between 5-10 MiB (though a smaller value may work better on a slower or less reliable connection), and to be a multiple of 320 KiB. Default value: 10 MiB

was mentioned by the developer, but there are also --fragment-retry-delay and --fragment-retry-count available. These are OneDrive-specific, and there are also file (not fragment) level generic options number-of-retries and retry-delay. Your message says Retry-After: 55. The code change I can see is mentioned here in the thread, and discussion continues at PR Try to fix OneDrive upload throttling #4469.
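As an aside, the "multiple of 320 KiB" constraint on --fragment-size is easy to get wrong when experimenting with values. A tiny sketch that rounds a desired size down to a valid value (the helper name is hypothetical, not part of Duplicati):

```python
# Graph API large-file uploads use fragments sized in multiples of 320 KiB,
# per the --fragment-size help text quoted above.
CHUNK = 320 * 1024  # 320 KiB

def valid_fragment_size(desired_bytes: int) -> int:
    """Round desired_bytes down to the nearest multiple of 320 KiB,
    never going below a single chunk. Hypothetical helper for experimenting."""
    return max(CHUNK, (desired_bytes // CHUNK) * CHUNK)
```

For example, 5 MiB (5242880 bytes) is exactly 16 chunks, so it passes through unchanged, while an arbitrary value like 5000000 gets rounded down.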

Fixed throttling requests to OneDrive and respecting the server retry-after headers, thanks @tygill

but the catch with this approach is that the change is not yet in a Beta. If you want to run the latest Canary and watch it more carefully (for example by following the forum, especially the Releases category, for any added breakage), perhaps that may work. In a later Canary, there’s another change to do retries the way some servers like:

Added exponential backoff for retries, thanks @vmsh0

If you decide to try the OneDrive-specific options, adding them on the Options screen is safer, as additions done on the Destination screen sometimes get lost. While on Options, make sure that the Remote volume size is something not hugely different from default 50 MB. Very large values might complicate this situation.

I suppose there’s also a chance that waiting a day will see a better situation if something resets or clears…

Thanks very much for your detailed response. Whilst reading through the PR you linked I saw mention of use-http-client and that reminded me that I had set this to true over a year ago when I first started using Duplicati and had problems that were resolved by doing so.

I have now removed that setting and it seems my backup finished successfully. Things look OK now but I’ll keep an eye out on it.


Same problem on the “latest” beta.
I never used the use-http-client parameter, but changing it to true or false did not solve the issue.

I’ll wait a few more days to see whether it fixes itself on the MS side. If not, then updating from beta to canary is probably the only solution.

I am using the latest canary version (v2.0.6.104) and I am having the same issues. I tried turning off/on the use-http-client but it did not make any difference.


I have the same issue. Tried a few upload and download throttling options but the error remains.


At one point, I deleted the database and rebuilt it. This did fix the issue for one or two backups. Then, it came back.

The funny thing is that I do two backups into OneDrive. Only one is failing.

I’ll have to move to another storage provider if I can’t get this to work soon :cry:

Did the rebuild really help? I get an error even if I just try to restore :smile:

Do we have any theories on why this problem doesn’t affect everyone?
Maybe the problem only occurs with backups over a certain size (number of files) or something like that?

I did wonder if it could depend on which datacenter you connect to? Each one of my failed backups was connecting to “UK South”:

{"ServerInfo":{"DataCenter":"UK South","Slice":"E","Ring":"3","ScaleUnit":"002","RoleInstance":"xx"}}

{"ServerInfo":{"DataCenter":"West Europe","Slice":"E","Ring":"5","ScaleUnit":"001","RoleInstance":"xx"}}

According to the MS365 Admin Center, my data is located in Australia. So, it would seem that the issue we are having is not related to the datacentre location.

mr-flibble, as I mentioned, I am storing two different backups in OneDrive. The backup with the larger number of files and amount of data is working OK; I am having issues with the smaller one. Also, the remote volume size for both backups is the standard Duplicati size (50MB).


I am getting the same error even while rebuilding the database.

Did anyone find a solution or workaround already?

Yes, it’s working fine for me now but perhaps my case is not common.

Thanks @pepperywasp . I’ve upgraded to the latest canary and the backup completed successfully last night!

I’m glad it helped. A less rosy result:

and if it doesn’t help, an idea might be to see if setting this to 1 helps Duplicati please OneDrive:

  --asynchronous-concurrent-upload-limit (Integer): The number of concurrent
    uploads allowed
    When performing asynchronous uploads, the maximum number of concurrent
    uploads allowed. Set to zero to disable the limit.
    * default value: 4

because there are parallel uploads by default (for more speed, while OneDrive may prefer fewer…).
Unless Duplicati coordinates its uploads, OneDrive might perceive a lack of respect for its Retry-After.
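What setting that limit to 1 effectively does can be sketched like this (a hypothetical illustration in Python, not Duplicati’s actual C# code): a semaphore gates how many upload workers are allowed to run at once.

```python
import threading
import time

def upload_all(blocks, upload_one, limit=1):
    """Run upload_one(block) for every block, with at most `limit` uploads
    in flight at a time - mimicking --asynchronous-concurrent-upload-limit."""
    gate = threading.Semaphore(limit)
    threads = []

    def worker(block):
        with gate:  # blocks here until a slot is free
            upload_one(block)

    for block in blocks:
        t = threading.Thread(target=worker, args=(block,))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
```

With limit=1 the uploads are fully serialized, which trades speed for fewer simultaneous requests hitting OneDrive.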

Actual behavior could probably be observed in a log-file=<path> at log-file-log-level=profiling, then
be ready to open a very large file to look for the below. Or use line search (find, findstr, grep, etc.).

log-file-log-filter can isolate the RetryAfterWait tag if you like, but you lose context. That might be good or bad, because I don’t know whether context matters to OneDrive. Maybe they just see Retry-After ignored?

Updating from beta to the latest canary and setting --asynchronous-concurrent-upload-limit (maybe not needed) solved the problem for me.


I stopped using OneDrive for Business as a backup target since I kept having issues with it. Last week, I tried again (after 3 months) and it seems to work again, without any change on my side.

Just wanted to share, and see if it has been resolved for others as well?