Upload Throttle not working

1500 KB is 1.5 MB, so 1.64 MB is very close to what you set.

I just tried Google Drive at 100 KBytes/sec up and down, and saw peak uploads of 1.1 Mbit/sec. That’s a little higher than the 800 Kbit/sec the math calls for, but I think the throttle targets an average, measured after gaps.


I think the speed below, from --log-file-log-level=profiling, shows how a requested 100 KByte/sec turned out:

2020-01-24 18:42:33 -05 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 9.86 MB in 00:01:44.5579796, 96.54 KB/s
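
For anyone who wants to check the arithmetic, here it is as a quick Python sketch (the log’s 9.86 MB is a rounded figure, so the recomputed speed is approximate):

    # 1 KByte/sec = 8 Kbit/sec, so a 100 KByte/sec throttle should appear
    # as roughly 0.8 Mbit/sec on a network graph that reports megabits.
    requested_kbytes = 100
    print(f"{requested_kbytes} KByte/s = {requested_kbytes * 8 / 1000} Mbit/s")  # 0.8 Mbit/s

    # Recompute the profiling log line: 9.86 MB in 00:01:44.5579796.
    seconds = 1 * 60 + 44.5579796
    print(f"average = {9.86 * 1024 / seconds:.2f} KB/s")  # ~96.6 KB/s, close to the logged 96.54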

You might see spikes if the network gets slow for a while; afterwards there can be a catch-up burst as the transfer races back to the average. You might be able to force this by pulling a network cable for a bit, but not so long as to fail the upload.

Duplicati does not have network-level control of the sending speed. If you need that, your router might offer it.

I don’t know why anything changed since your prior version, but there were definitely changes, such as parallel uploads, that may have modified some behaviors. I don’t have Jottacloud, so I can’t test with that.

Same problem for me. Duplicati runs in a Docker container on Unraid, backing up to OneDrive; throttling doesn’t work, I cannot stop or pause the transfer in the UI, and it’s taking all the bandwidth on my network.

The solution that seems to work for me is, as “ts678” mentioned, setting both upload and download throttling, as well as setting --asynchronous-concurrent-upload-limit to 1.
Kru-x


My Google Drive test left --asynchronous-concurrent-upload-limit at its default of 4. I don’t know the exact algorithm (and have not tested all settings), but a good plan might be to split the allowed throttle across however many concurrent uploads are allowed. Concurrent uploads are more for people who want more bandwidth used, since network latency limits how much can be pushed over one connection.
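
The splitting is just arithmetic; here’s a tiny Python sketch of the idea (my illustration, not Duplicati’s actual code):

    # Give each concurrent upload an equal share of the overall budget, so
    # the streams as a group stay under the requested throttle.
    def per_stream_limit(total_kb_per_sec, concurrent_uploads):
        return total_kb_per_sec / max(1, concurrent_uploads)

    print(per_stream_limit(100, 4))  # 25.0 KB/s each with the default 4 uploads
    print(per_stream_limit(100, 1))  # 100.0 KB/s -- one stream gets the whole budget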

So I guess we can say that a flat “not working” isn’t the case, but I’m not sure what exact issues exist.
If anyone characterizes it well enough, the best place to get it in line for a fix someday is in Issues.

I jumped the gun. Suddenly Duplicati is using all my bandwidth, and I have to shut down the container to get it back. The UI still shows the download limit at the set value, and I cannot pause or stop it from the UI.

@kru-x You may want to update the container. I’m running the linuxserver.io container on Unraid and updated about 3 hours ago. Since then I have quite steady speeds between 1.44 and 1.47 MB/s, which is what I expected. I’m not sure the container update is the reason, because I also forgot to restart the upload after I changed the parallel uploads to 1, but it has worked as expected for a few hours now. I’ll have to see if it stays like this.

If there’s a container-related issue, I’m not set up to look into those. It’d be kind of puzzling, because I think throttling is done by Duplicati writing data into (or reading from) the OS’s networking at a specified rate. Beyond that, it’s up to the OS, the router, etc. to do the low-level data moves. Do containers change that?
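
For anyone wondering what application-level throttling looks like in general terms, here’s a minimal Python sketch (my own illustration, assuming a chunk-and-sleep approach; it is not Duplicati’s actual code):

    import io
    import time

    # Write fixed-size chunks and sleep off the remainder of each chunk's
    # time slot. Everything below this level (TCP, the NIC, the router) is
    # out of the application's hands, so each chunk may still burst onto
    # the wire at full line speed.
    def paced_write(stream, data, kb_per_sec, chunk_size=4096):
        interval = chunk_size / (kb_per_sec * 1024)  # seconds per chunk at the target rate
        for offset in range(0, len(data), chunk_size):
            start = time.monotonic()
            stream.write(data[offset:offset + chunk_size])
            elapsed = time.monotonic() - start
            if interval > elapsed:
                time.sleep(interval - elapsed)

    paced_write(io.BytesIO(), b"x" * 64 * 1024, kb_per_sec=100)  # takes ~0.6 seconds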

I did think of other possible causes of spikes. I suspect the main goal was an average speed for upload and download, possibly not including authentication delay, and maybe not accounting for TCP slow start. Some short operations, such as authentication and directory listings, may also run at full speed.
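
To make the averaging point concrete, here’s a toy Python sketch (again my illustration, not Duplicati’s code) of a limiter that only enforces the long-run average; after a stall it legally lets a large write through at full speed, which would show up as a spike:

    import time

    class AverageThrottle:
        """Enforce only total_bytes / total_elapsed <= limit."""
        def __init__(self, kb_per_sec):
            self.rate = kb_per_sec * 1024
            self.start = time.monotonic()
            self.sent = 0

        def wait_to_send(self, nbytes):
            self.sent += nbytes
            # Earliest time at which the running average is back under the limit.
            delay = self.start + self.sent / self.rate - time.monotonic()
            if delay > 0:
                time.sleep(delay)

    throttle = AverageThrottle(kb_per_sec=100)
    throttle.wait_to_send(50 * 1024)   # paced normally (~0.5 s wait)
    time.sleep(5)                      # simulate a network stall
    throttle.wait_to_send(500 * 1024)  # returns immediately: a catch-up burst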

Doing a thorough analysis would probably need watching About → Show log → Live → Profiling and the network speed at the same time, trying to spot spikes, guessing at the cause, and maybe changing settings.

There are a few Storage Destination types that can’t throttle, but Jottacloud should be able to, as it’s one of the types that supports it.

Good luck on the tests, welcome to the forum @kru-x, and please file an Issue if the bug gets more solid.

From what I know about containers, I think they can be made in a way that modifies data streams at quite a low level, but I don’t think there would be a reason to create this container in such a way. The restart of the download probably helped more than the container update. I’ve seen consistent speeds all day. I do still have spikes, but they are smaller, last less than a second, and are far less common. So now it is working as intended. Thank you for your help :smiley:


I have this issue with Jottacloud too. Nothing will stop it from using all available bandwidth.

Edit: Okay, it works if:

  • Stop upload
  • Set throttle
  • Set threads to 1
  • Restart upload

So you can’t change it during an upload. Also, stopping the upload completely broke the backup; I had to delete and recreate it.

What Duplicati version? Before 2.0.5.1, there was an issue opened on throttle direction confusion:

--throttle-upload affects restore download performance #3272

2.0.5.1 seems to confuse them differently. If symmetric speeds are OK, set upload AND download.

Regarding stopping, which variety did you use? “Stop now” is a harder stop than the slower option.
The safest stop is likely 2.0.5.1’s “Stop after current file”, which also has a changed name from previous versions.

Stop now results in “Detected non-empty blocksets with no associated blocks!” #4037 is fixed, but the fix didn’t make it in time for 2.0.5.1 Beta. I think Canary has the fix, but Canary also has new code and is a risk.

2.0.5.1_beta_2020-01-18

Stop now does eventually seem to work. I think the UI gets out of sync; the completion % seems random, and it goes up and down all the time too. Thanks for the info on the other issue.

“Stop after current file” is definitely slow, because there’s a long pipeline of work-in-progress to finish.

“Stop now” is something I avoid because it’s sometimes a source of problems – but is getting better…

Throttling is still rather mysterious. My tests on 2.0.5.1 are showing upload throttle controlling both the upload and download rate, and download throttle controlling nothing, which is a change from before…
Maybe there’s also something specific to Jottacloud (which I don’t have) that makes throttling worse…

I hope to have some pull requests up soon (maybe by the end of the week) to make this drastically better or, fingers crossed, to have it work properly.


I’m running into the same thing. Here are the specifics:

  • Running on Linux Mint, latest version, up to date
  • Duplicati 2.0.5.103_canary_2020-02-18
  • Backing up to Google Drive
  • asynchronous-concurrent-upload-limit set to 1
  • Upload and download limits both set to 1 KByte/s
  • Current upload speed: 209 KB/s
  • Pause does not work
  • Stop running backup does not work
  • It’s currently backing up a new 6 GB file
  • My internet connection is virtually unusable because the upload is completely saturated

I hope someone can figure out what’s going on here.

Try setting the limit before you start the backup. That works for me.

--throttle-download ignored, --throttle-upload throttles download too #4115
is the issue I opened. YMMV but behavior seems to have changed in 2.0.4.16 Canary (whose code is newer than 2.0.4.23 Beta because 2.0.4.23 Beta was 2.0.4.5 Beta plus a warning that had to get out).

There’s a table there with results. If you’re on a Beta before 2.0.5.1, try throttling download to throttle upload; however, 2.0.5.1 has so many reliability fixes that I’d suggest 2.0.5.1 and setting the upload throttle, even if download also gets throttled. If you need to do a big restore, maybe just turn off all throttling and run…

I’m on 2.0.5.1_beta_2020-01-18, Windows 10, backing up to an S3-compatible bucket, and I’m still experiencing big issues.

Restarting the machine and setting both upload and download caps during the warmup pause period does nothing. Upload is set to 50 KB/s (<0.5 Mbps), but the network still shows Duplicati using an average of 1.5 Mbps over the last hour that I’ve been testing it.

Then when I try to stop it to get my limited bandwidth back, “Stop Now” doesn’t work; it displays “Stopping after the current file” instead (and no, I didn’t hit the wrong button), even after multiple attempts to use “Stop Now”. After attempting this, things get worse: the progress bar on the current backup job stops progressing at all, even though the overall progress bar shows a rate of a little over 100 KB/s and continues to fluctuate. Sometimes the main progress rate disappears, but after an F5 refresh it appears again. Nothing works to stop it; the only way I’ve been able to stop it is to kill the process in Task Manager.

I’ve seen mentions above of multiple threads perhaps each being limited individually without respecting the overall limit as a group, but I can’t find any setting where I can set a thread limit of 1.

I’d be happy to provide any other info you need to diagnose this, but I’d also appreciate any suggestions for a temporary workaround. One of my backup tasks has been failing since the beginning of April and is quite out of date now.

Thanks,
Kevin.


EDIT: I see a number of Canary versions but no mention of throttling in the release notes. Even so, I’ll give duplicati-2.0.5.107_canary_2020-05-26-x64 a try and see how it goes.


EDIT 2: No joy; the same issues persist, including having to kill the process to stop whatever is going on, because “Stop Now” doesn’t work. These hard stops keep corrupting my databases, continually forcing me to rebuild them.

Is this exact sequence required? Is this the GUI control at the top of the page? Duplicati restarts don’t reset that.

What happens if you just leave it set, then run the backup? For me, setting upload throttling there works fine for OneDrive and Google Drive. I don’t have S3 to try, but I can’t think of any reason it would work differently.

Here are my results at 10 KByte/second. One difference is that Google Drive has --asynchronous-concurrent-upload-limit at 1 instead of the default 4. I’m not familiar with the code details, but I think when the parallel upload code got added, it couldn’t do each upload at the stated speed, or it would exceed the specified limit.

Possibly the math isn’t right yet; however, you can certainly throttle even lower to see if you can get any…

OneDrive sample dblock upload:
2020-06-03 07:16:03 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b308c3beeed464101a7b6eaba42aa8340.dblock.zip.aes (27.08 MB)
2020-06-03 08:03:27 -04 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 27.08 MB in 00:47:23.4198389, 9.75 KB/s
2020-06-03 08:03:27 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b308c3beeed464101a7b6eaba42aa8340.dblock.zip.aes (27.08 MB)

Google Drive sample dblock upload:
2020-06-03 09:33:40 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bb55d347b41d1491482c43ac7c5c8c958.dblock.zip.aes (36.82 MB)
2020-06-03 10:35:08 -04 - [Profiling-Duplicati.Library.Main.Operation.Backup.BackendUploader-UploadSpeed]: Uploaded 36.82 MB in 01:01:27.9430744, 10.22 KB/s
2020-06-03 10:35:10 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-bb55d347b41d1491482c43ac7c5c8c958.dblock.zip.aes (36.82 MB)

About → Show log → Live → Retry would give you a live log of major events like file uploads, and that should be enough to see whether uploads are moving too fast. The profiling log can do the math for you; however, it’s probably not worth it for a first shot, because profiling logs are huge. A smaller alternative would be a filtered log.
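
If you do collect a profiling log, a few lines of Python can scan it for the speed lines and flag anything over your limit (a rough helper; the regex matches the UploadSpeed/DownloadSpeed lines quoted above, and the 10 KB/s limit is just an example):

    import re
    import sys

    LIMIT_KB_PER_SEC = 10.0  # set this to your throttle value
    pattern = re.compile(
        r"(Uploaded|Downloaded) ([\d.]+ [KMG]B) in [\d:.]+, ([\d.]+) KB/s")

    # Usage: python scan_log.py duplicati-profiling.log
    for line in open(sys.argv[1], errors="replace"):
        match = pattern.search(line)
        if match:
            speed = float(match.group(3))
            flag = "  <-- over limit" if speed > LIMIT_KB_PER_SEC else ""
            print(f"{match.group(1)} {match.group(2)} at {speed} KB/s{flag}")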

How is network usage monitored? I watched in Task Manager. Not much else is typically uploading, and Duplicati certainly wasn’t blasting. I even watched packets in Wireshark and saw the data dribbling out, destined for the only Google (or any remote) destination Duplicati had an ESTABLISHED connection with.
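
If Task Manager is hard to watch over time, a small script using the third-party psutil package can sample the OS network counters once a second; note this is system-wide, not per-process, so quiet other uploads first:

    import time

    import psutil  # third-party: pip install psutil

    # Print the system-wide upload rate once per second.
    prev = psutil.net_io_counters().bytes_sent
    while True:
        time.sleep(1)
        now = psutil.net_io_counters().bytes_sent
        print(f"upload: {(now - prev) / 1024:.1f} KB/s")
        prev = now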

There have definitely been some bugs of confusion over throttling direction, but 2.0.5.107 should be fine.
--throttle-download and --throttle-upload on the job are alternate ways of throttling. They can also be set in Settings in Duplicati as a global option. I don’t recall which wins if the three spots I mentioned don’t agree.

I think this is just a messaging bug of reusing a message for a different situation. “Stop now” is not instant, but it is closer to instant than “Stop after current file”, which refers to the source file (as seen in the GUI). There is a long pipeline between seeing the file and actually getting everything processed and uploaded.

Stay as close to defaults as possible for now. Maybe you mean --asynchronous-concurrent-upload-limit as shown above, but as also demonstrated, it’s doing well with either 1 or 4 threads, at least in my testing.

You can certainly test a small, newly added backup to a local folder. Throttling should work there as well.

Upload results posted above. Download results at 100 KByte/sec:

OneDrive
2020-06-03 18:21:09 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-b8572d4ae76034e8b8abfe4417ec35fa9.dblock.zip.aes (36.22 MB)
2020-06-03 18:27:26 -04 - [Profiling-Duplicati.Library.Main.BackendManager-DownloadSpeed]: Downloaded 36.22 MB in 00:06:17.0884884, 98.35 KB/s
2020-06-03 18:27:26 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-b8572d4ae76034e8b8abfe4417ec35fa9.dblock.zip.aes (36.22 MB)

Google Drive
2020-06-03 14:42:51 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-bcb1775dde5a14746851f048ebbf987b9.dblock.zip.aes (22.77 MB)
2020-06-03 14:46:51 -04 - [Profiling-Duplicati.Library.Main.BackendManager-DownloadSpeed]: Downloaded 22.77 MB in 00:04:00.5378011, 96.92 KB/s
2020-06-03 14:46:51 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-bcb1775dde5a14746851f048ebbf987b9.dblock.zip.aes (22.77 MB)

This was on 2.0.5.106, I believe, and confirms that the 2.0.5.104 throttles control their correct direction.
There’s still a mystery about the new report, so I hope you can troubleshoot and figure out the problem.

That sounded similar, so I linked here. I wrote up some other evidence that upload throttling often seems to work.
There are no open issues with “throttle” in the title that sound like this. There’s one on throttle working when the backup is to a local folder, but you can use that to your advantage and create Steps to Reproduce.

Hey @ts678, sorry for not getting back sooner; I hope to dig into this tonight and come back with more information. I’m on a limited (1 Mbit up) connection here and wanted to give it a chance to get through a full backup at least once, since it hasn’t completed successfully in the last 2 weeks. Alas, after 2 days of not being able to do anything else on the internet because the throttling (set at 25 KB/s, then dropped to 5 KB/s) was not being adhered to, I finally had to kill the process again.

I’ll try the suggestions you made one at a time to see what has an effect, and I’ll try to collect logs this time to provide some insight. I appreciate the help and will do my best to help sort through it!

Cheers,
Kevin.

Yes, to try to eliminate any lingering setting changes or anything else, I was attempting to do it after a reboot. I’ve since not had to, as the throttle limits stay in place after a reboot. I’ve set it to 10 KB/s up, so that I can see Duplicati working, yet it’s not so little that it could be mistaken for anything else.

I was eager to try this, so I’ve set it now, and it looks promising: on my first attempt, the 10 KB/s limit is actually being enforced. If I remove that setting, usage spikes again as if it’s not controlled at all. It feels like a sanity check around throttle/threads isn’t working as expected. I think that’s what you alluded to here as well.

Same. I watch the overall “Performance” graph for that network device, as well as the “Processes” entry for Duplicati, to see what it’s using specifically. As noted above, when I set the upload thread limit to 1, it averages out at my expected 10 KB/s (0.1 Mbps), but without the thread limit it was averaging over 1.0 Mbps, as if no limit were set at all.

Fair enough. I think that would be helped by a bit more information than the short stopping message in the status bar at the top. I assume it’s more akin to “Stop after current chunk” than “Stop instantly”, so it would be ideal to see some chunk progress indicator, so there is some expectation that it will indeed end. If I watch in Task Manager, the process seems to sit at 0% CPU after I try to stop, so it feels like it’s frozen. More feedback would allay this.

I’m rebuilding all of my DBs to give them a fresh shot now that I’ve limited upload threads to 1, and I’ll report back on how it does once I’ve seen some progress.

Thanks again!