Throttle Not Working

I’ve recently installed this on my UNRAID server using linuxserver/duplicati docker.

It says the version is Duplicati - 2.0.5.1_beta_2020-01-18.

I’m trying to set up a max upload of 10 Mbps, but no matter what I set, it keeps maxing out my upload connection, even when I set it to 100 Kbps.

Any ideas? I’ve also tried enabling the download throttle (since there is an old issue where that was recommended), but it didn’t change anything.

Welcome to the forum @Fiala06

You could try setting the option below to 1. That helps with some destination types. What’s yours?

  --asynchronous-concurrent-upload-limit (Integer): The number of concurrent
    uploads allowed
    When performing asynchronous uploads, the maximum number of concurrent
    uploads allowed. Set to zero to disable the limit.
    * default value: 4
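If you’re not sure where to put it: in the GUI it’s an Advanced option on the Options step of the job, and in an exported command line the pair of settings would look roughly like this (the throttle value here is just an example):

  --throttle-upload=100KB
  --asynchronous-concurrent-upload-limit=1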

EDIT: Found the option you listed. I’ve set it to 1 and it’s still maxing out my upload.

Thank you. I’m using OneDrive for the destination. I’m not sure where to define that option, but I have noticed it’s the same for Google Drive.

Working fine here on 2.0.5.1 to Google Drive. How are you measuring? Here are Duplicati and Task Manager. Actual network traffic is a bit bursty, but OneDrive v2 looks bumpier without config tweaks.

[screenshot: Duplicati status bar and Task Manager during a throttled upload]

EDIT: Note my setting is 100 KBps, which is 800 Kbps (bytes versus bits) – did you account for that?
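(For the 10 Mbps upload cap mentioned in the first post, that works out to 10 / 8 = 1.25 MBytes/s, so a throttle of roughly 1.25 MB would be the equivalent setting.)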

The GUI explains the adjustment better:

[screenshot: the GUI throttle settings, which explain the unit adjustment]

Directly from my router, the Unraid server, and per-container Docker monitoring. All point to the same thing: Duplicati.

Then let’s see what Duplicati sees, to study its perspective. This is long, but please try at least some of it.
Clearly I can’t observe your system, but it seems like you would be able to look at Duplicati, correct?

What does Duplicati’s status bar say (similar to my screenshot), is this at asynchronous-concurrent-upload-limit of 1, and which destination are you using? What happens if you change the throttle rate?

If you would rather not impact your regular backup, you can set up a test backup to do settings tests, sometime when the regular backup is not running. Note that you can still throw off the schedule, e.g. extremely low throttles may make the test job run over, which might impact the regular job start time.

Since you seem router-savvy, I’d note that smoother flows can be created with router facilities, because traffic shaping is a router specialty. Duplicati should be able to do average rates well, though.

For QoS, an earlier forum topic came at the end of a technical dive proposing a OneDrive-specific tuning option to reduce burstiness. Ultimately the OneDrive setting sounds like it wasn’t used because the router method was doing well.

In addition to viewing the Duplicati status bar (like I showed) to see what it thinks its uploading rate is, detailed information on whole-file uploads is available in a couple of other ways. The no-setup way is About → Show log → Live → Information, where you can watch the files go up. Most interesting are the large dblocks, whose size is configurable in Options (Remote volume size). Watch the times and see whether the rate is still too high after setting up the other suggested options, possibly at lower values. You now asked for 2 MBytes and got 3.4, and settings questions remain, but regardless: what happens if you ask for 1 MByte? A slower rate still?
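On a test backup, shrinking the remote volume keeps a steady stream of dblock uploads going without needing much source data; the advanced-option form of the 5 MB value I used below would be:

  --dblock-size=5MB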

Another way to see the upload information and compute a rough rate is in <job> → Show log → Remote, where the dblock rate should be the main factor because the dindex files for dblocks are much smaller:

This is kind of rough and requires you to do the math, but a better reading requires looking in a log file, e.g.

2021-02-13 21:23:24 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b7c6d668bcc8b414a887d64cda47db334.dblock.zip.aes (4.92 MB)
2021-02-13 21:24:15 -05 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 4.92 MB in 00:00:50.9390276, 98.81 KB/s
2021-02-13 21:24:15 -05 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b7c6d668bcc8b414a887d64cda47db334.dblock.zip.aes (4.92 MB)

I was running a 5 MB remote volume size to make sure I got a good flow of dblocks with little total data.
Here, the Information-level time includes seconds, not just minutes as in the live log. Doing my own math: the elapsed time is 51 seconds, the size is roughly 5 * 1024 * 1024 = 5242880 bytes, and 5242880 / 51 ≈ 102802 bytes/second, which is near the 100 KB/s target.
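If you would rather have a script do that arithmetic, here is a minimal sketch using the numbers from the log lines above (the 5 MB size is my estimate, since the log rounds to 4.92 MB):

  # Rough rate for the Put above: about 5 MB uploaded between 21:23:24 and 21:24:15.
  size_bytes = 5 * 1024 * 1024      # estimated size in bytes
  elapsed_seconds = 51              # 21:24:15 minus 21:23:24
  rate = size_bytes / elapsed_seconds
  print(f"{rate:.0f} bytes/s = {rate / 1024:.1f} KB/s")   # ~102802 bytes/s, ~100.4 KB/s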

The options would be log-file=<path> and log-file-log-level=information. Retry level is similar but also shows retry actions. Profiling produces huge output. You can instead use log-file-log-filter to keep only the interesting lines, e.g. *UploadSpeed:

2021-02-14 10:27:42 -05 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 49.90 MB in 00:08:32.0315811, 99.80 KB/s
2021-02-14 10:27:43 -05 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 43.06 KB in 00:00:00.8976540, 47.97 KB/s
2021-02-14 10:28:07 -05 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 2.32 MB in 00:00:24.2611805, 97.93 KB/s
2021-02-14 10:28:08 -05 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 22.31 KB in 00:00:01.0062725, 22.17 KB/s
2021-02-14 10:28:09 -05 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 6.45 KB in 00:00:00.9460525, 6.82 KB/s

The default-size dblock got throttled down; otherwise it would have been about 700 KBytes/s on my line.
Mine is Windows 10, Google Drive, GUI throttle at 100 KBytes, and 1 concurrent upload to keep it simple.
If you’re seeing different results, let’s figure out where things diverge, starting from Duplicati’s data.
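If a log-file full of these builds up, a small sketch like the following can average the UploadSpeed lines; the file path and the regular expression are mine, assuming lines in the format shown above:

  import re

  # Pull the size and rate out of UploadSpeed lines like the samples above.
  pattern = re.compile(r"Uploaded ([\d.]+) (KB|MB) in [\d:.]+, ([\d.]+) KB/s")

  total_kb = 0.0
  total_seconds = 0.0
  with open("duplicati.log") as log:          # placeholder path to your log-file
      for line in log:
          match = pattern.search(line)
          if not match:
              continue
          size_kb = float(match.group(1)) * (1024 if match.group(2) == "MB" else 1)
          rate_kb_s = float(match.group(3))
          if rate_kb_s <= 0:
              continue
          total_kb += size_kb
          total_seconds += size_kb / rate_kb_s

  if total_seconds:
      print(f"{total_kb:.0f} KB in {total_seconds:.0f} s, about {total_kb / total_seconds:.1f} KB/s overall")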

I did backups to WebDAV before, and there the throttling worked as expected. I recently had to switch and moved to OneDrive. The issue of throttling not working seems to only affect OneDrive. I have now configured OneDrive as follows:

--fragment-size=983040
--fragment-retry-count=10
--asynchronous-concurrent-upload-limit=1

It seems that the speed is checked after each fragment is uploaded. So with the 983040-byte fragments, the upload only reaches about 1 MB/s before the fragment is finished; it then checks the configured upload limit and either waits or continues.

There is another issue with OneDrive and not using the Retry-After header: OneDrive respect "retry-after" header · Issue #4438 · duplicati/duplicati · GitHub

I’m not sure whether you’re trying to make another real-time upload happy or just avoid hogging. Hogging can probably be avoided by throttling down, but the minimum burstiness probably comes from Microsoft:

Upload bytes to the upload session

the size of each byte range MUST be a multiple of 320 KiB (327,680 bytes).

Do you think that’s relevant? Issues exist (as you can see from GitHub). Volunteers are much needed.

My main goal was to not mess up the 4 MS Teams/Zoom sessions we have during the current lockdown. So I wanted minimum impact on the upload side of my cable connection, and was playing around with that.

My 983040 fragment size is 3x the 320 KiB (327680 bytes), so that should be fine. This keeps my peaks within my limits.
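For anyone picking a different value, a quick sanity check of the multiple-of-320-KiB rule quoted earlier (983040 is just the value I settled on):

  CHUNK = 320 * 1024            # 327,680 bytes, the granularity Microsoft requires
  fragment_size = 983040        # my current --fragment-size value

  multiple, remainder = divmod(fragment_size, CHUNK)
  print(f"{fragment_size} bytes = {multiple} x {CHUNK} + {remainder}")
  assert remainder == 0, "fragment-size should be a multiple of 320 KiB"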

I will check if I can contribute to the retry-after issue.

That’s good to hear, and you have a bit more room to play with if you ever require smaller peaks.
I’m not terribly expert in all the code internals, I’m afraid (GitHub has people, but not enough), but the fragment size in the OneDrive backend is what your option dials back to a throttling-friendlier peak. I don’t know what other backends do. I thought possibly that could be investigated, someday in the future when hotter issues are few.

Thank you. Pull requests are appreciated. Duplicati now has a fixed retry-delay between its tries.
Use exponential backoff for retries #2399 requests what some destinations tell their apps to use.
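For context, a minimal sketch of the kind of retry delay #2399 asks for; this is not Duplicati’s code, and honoring a server-supplied Retry-After is the part issue #4438 covers:

  import random

  def backoff_delay(attempt, base=1.0, cap=60.0, retry_after=None):
      # Exponential growth with full jitter, capped, and never shorter
      # than whatever Retry-After the server requested.
      delay = random.uniform(0, min(cap, base * (2 ** attempt)))
      if retry_after is not None:
          delay = max(delay, retry_after)
      return delay

  # What the first few retry waits could look like (in seconds):
  for attempt in range(5):
      print(attempt, round(backoff_delay(attempt), 2))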

Any pointers to where in the code the retries are managed?

Generally my approach is to start from the option, then look for a run-together version of it. This leads to:

however the Microsoft Graph backends seem to have an additional level that I can’t explain, but will point to:

There is a common interface (which gets expanded on for backends with additional capabilities) defined at

https://github.com/duplicati/duplicati/blob/master/Duplicati/Library/Interface/IBackend.cs

and I’m not sure how much unwanted isolation that might add when going between the code areas. If this gets deeper, you might want to ask in the Developer category of the forum, which some developers might visit…

EDIT:

I’m not sure if it fits or will help here, but my hints about the lower-level HTTP response handling are here.