I’m a CrashPlan orphan who is setting up a new backup strategy. So far, I like Duplicati a lot. However, I have a problem when backing up my VMs hosted on OVH.
I’m running Duplicati - 2.0.2.1_beta_2017-08-01 on a CentOS 7.0 server. I’m trying to back up to OpenStack Simple Storage, specifically OVH Cloud Storage. I’m able to run most of the backup without problems. However, I get the following errors:
Failed to process file duplicati-20171121T213017Z.dlist.zip.aes => The remote server returned an error: (429) Too Many Requests.,
Failed to process file duplicati-ib873744bdee448cba80263f610a01ad4.dindex.zip.aes => The remote server returned an error: (429) Too Many Requests.,
Failed to process file duplicati-b42c8c4af0c4a4b699b88292a29ebf1c7.dblock.zip.aes => The remote server returned an error: (429) Too Many Requests.
Is there any way to throttle how fast Duplicati sends requests to the storage server?
It might be possible, but since it’s OVH’s stack, I don’t have control over it. I’m pretty sure that if OVH is throwing this error, Duplicati is being quite aggressive.
That said, I did try to throttle the bandwidth. It did not help the situation.
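One thing I realised is that the bandwidth throttle caps bytes per second, while a 429 is about requests per unit of time, so the two are mostly independent. As far as I know Duplicati doesn’t expose a request-rate cap, but purely for illustration, a client-side one would look roughly like the token-bucket sketch below (the 50-per-minute budget is a guessed value, chosen to stay under an assumed 60-requests-per-minute server limit):

```python
import time

class RequestRateLimiter:
    """Token bucket: allow at most `rate` requests per `per` seconds."""

    def __init__(self, rate=50, per=60.0):
        self.rate = rate              # assumed budget: 50 requests...
        self.per = per                # ...per 60-second window
        self.tokens = float(rate)
        self.updated = time.monotonic()

    def acquire(self):
        # Refill tokens in proportion to the time elapsed since the last call.
        now = time.monotonic()
        self.tokens = min(self.rate,
                          self.tokens + (now - self.updated) * self.rate / self.per)
        self.updated = now
        if self.tokens < 1:
            # Sleep just long enough for one whole token to accumulate.
            time.sleep((1 - self.tokens) * self.per / self.rate)
            self.updated = time.monotonic()
            self.tokens = 0.0
        else:
            self.tokens -= 1

# Calling limiter.acquire() before every storage request keeps the client
# under the assumed 50-requests-per-minute budget.
limiter = RequestRateLimiter(rate=50, per=60.0)
```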
I’m in the exact same situation with the exact same problem… I was using CrashPlan and picked OVH as my storage provider, and I’m getting the same “too many requests” error.
It works for the first backup, but a second backup will produce this error.
I contacted OVH customer service and they told me it’s not something they can change on their side; the software needs to be changed to support server-side throttling.
I’m on Windows and the latest beta (2.0.2.2 I think) didn’t solve this problem.
I know this post is old, but things might have changed on Duplicati’s side, not OVH’s though… I encountered the same error for the same reasons.
I’m on version 2.1.0.5_stable_2025-03-04 of Duplicati.
In my case, it occurred during the verification of the files.
Searching around the internet, I found that it might be related to the generation of tokens: there is a limit of 60 tokens per 60 seconds, with no way to tweak this value. (The source I found is in French.)
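To clarify what “generation of tokens” means here: every password authentication against the OpenStack Identity (Keystone) v3 API issues a new token, and it is apparently that issuance that OVH caps at 60 per 60 seconds. A client that authenticates once and reuses the token for all subsequent Swift requests stays far below the cap; one that re-authenticates per connection can hit it. A minimal sketch of what a single token issuance looks like (the auth URL and credentials are placeholders, not OVH-specific values):

```python
import requests

AUTH_URL = "https://auth.example.net/v3/auth/tokens"  # placeholder Keystone endpoint

def issue_token(username, password, project, domain="Default"):
    """One Keystone v3 password authentication = one token issued (counts toward the cap)."""
    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {"user": {
                    "name": username,
                    "domain": {"name": domain},
                    "password": password,
                }},
            },
            "scope": {"project": {"name": project, "domain": {"name": domain}}},
        }
    }
    resp = requests.post(AUTH_URL, json=body, timeout=30)
    resp.raise_for_status()
    # Keystone returns the token in a response header, not in the JSON body.
    return resp.headers["X-Subject-Token"]

# Reusing the same token for every Swift request avoids new issuances:
#   requests.put(object_url, data=blob, headers={"X-Auth-Token": token})
```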
I am currently trying with the concurrency-max-threads setting set to 4 (which is nicer for my poor little Raspberry Pi anyway).
For a small backup (1.50 GB), it worked marvelously. I am currently running a larger one (64 GB) and will report tomorrow whether it worked.
After a good night’s sleep
So all my backups ran, which represents around 115 GB. The error was raised for some of them, but not all, even though Duplicati reports all backups as successful.
My assumption is that Duplicati retries the file verification later, and the OVH quota has reset in the meantime, so it succeeds.
So, solved? Maybe… But not really a production-grade solution…
Other APIs have their own limits. It’s not clear which one affects Duplicati’s usage.
Possibly Duplicati’s Swift storage upload rate is limited by the Keystone rate limit?
Although I couldn’t find OVH docs (maybe someone else can), Swift can rate limit.
Rate limiting is not ideal even if there’s a configurable way to set it. If one assumes that Keystone was the limit, how did Duplicati do 60 requests per minute for authentication?
Duplicati has a retry mechanism, with 5 retries by default and a default 10-second delay between them.
I’d sort of hope that HTTP errors (especially a temporary 400-series status) would trigger a retry.
This can be seen in the job log’s Complete log as RetryAttempts. Did the failed backups retry?
--number-of-retries (Integer): Number of times to retry a failed transmission
If an upload or download fails, Duplicati will retry a number of times before
failing. Use this to handle unstable network connections better.
* default value: 5
--retry-delay (Timespan): Time to wait between retries
After a failed transmission, Duplicati will wait a short period before attempting
again. This is useful if the network drops out occasionally during transmissions.
* default value: 10s
--retry-with-exponential-backoff (Boolean): Exponential backoff for backend errors
After a failed transmission, Duplicati will wait a short period before attempting
again. This period is controlled by the retry-delay option. Use this option to
double that period after each consecutive failure.
* default value: false
A log of tries and retries is available via log-file=<path> with log-file-log-level=retry.
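To make the interaction with a per-minute quota concrete, here is a rough sketch of how those defaults could play out, assuming the first retry waits the base delay and exponential backoff then doubles it on each consecutive failure:

```python
def retry_delays(retry_delay=10, retries=5, exponential_backoff=True):
    """Yield the wait (in seconds) before each retry attempt."""
    for attempt in range(retries):
        yield retry_delay * (2 ** attempt) if exponential_backoff else retry_delay

print(list(retry_delays()))                           # [10, 20, 40, 80, 160]
print(sum(retry_delays()))                            # 310 seconds of waiting in total
print(list(retry_delays(exponential_backoff=False)))  # [10, 10, 10, 10, 10]
```

By the third or fourth retry a 60-second quota window has almost certainly rolled over, which would fit the earlier observation that the backups eventually report success.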
In the issue, I suggested both handling 429 responses and adding a rate limit for manual control. IMO this would solve it for most cases.
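For illustration, handling 429 specifically would mean treating it as “wait, then retry” rather than as a generic failure, and honouring a Retry-After header when the server provides one. A rough sketch of that idea around an upload call (not actual Duplicati code; the function and parameters here are invented for the example):

```python
import time
import requests

def upload_with_429_handling(url, data, headers, max_attempts=5, base_delay=10):
    """Retry an upload, pausing on 429 for Retry-After (or an exponential backoff)."""
    for attempt in range(max_attempts):
        resp = requests.put(url, data=data, headers=headers, timeout=300)
        if resp.status_code != 429:
            resp.raise_for_status()   # other errors still fail normally
            return resp
        # Too Many Requests: prefer the server's hint (assumed to be in seconds),
        # otherwise back off exponentially from the base delay.
        wait = float(resp.headers.get("Retry-After", base_delay * (2 ** attempt)))
        time.sleep(wait)
    raise RuntimeError(f"Still rate limited after {max_attempts} attempts")
```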
Each backend instance performs an authentication, so with the default 4 concurrent uploads that is 4 authentications. If they fail, a new backend instance will be created. With a 10s delay, that is at most 4 authentications per 10 seconds, or 24 per minute. Not close to 60, and only if they all fail.
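As a quick back-of-the-envelope check of that worst case, using the defaults assumed above:

```python
concurrent_uploads = 4   # default number of concurrent backend connections
retry_delay_s = 10       # default retry-delay
auths_per_minute = concurrent_uploads / retry_delay_s * 60
print(auths_per_minute)  # 24.0 -- well under a 60-per-minute token cap
```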