I think it would be nice to remember the throttle settings for the next time throttling is activated.
Every time I set the throttle I have to enter 75 and change MB/sec to KB/sec. (I do that every morning, because during the day I need the upload bandwidth for remote work.)
It would be nice to remember this; I think it is realistic that a user would reuse the last throttle settings.
I’m not sure if the Duplicati GUI uses cookies or not, but do you think it would be adequate to just store the last used value in a cookie and, if found, use that the next time a throttle is set?
Benefits include:
- “easy” to implement (a GUI-only change, no JSON / database updates needed)
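To make the idea concrete, here's a minimal sketch of what "remembering" could look like in the web UI, using the browser's localStorage (a close cousin of the cookie idea). This is not actual Duplicati UI code; the storage key and the setting shape are invented for illustration:

```typescript
// Minimal sketch (not actual Duplicati UI code): keep the last throttle
// value in the browser's localStorage and prefill the dialog with it.
// The storage key and ThrottleSetting shape are invented for illustration.
interface ThrottleSetting {
  value: number; // e.g. 75
  unit: string;  // e.g. "KB/s"
}

const STORAGE_KEY = "duplicati:lastThrottle"; // hypothetical key

function saveThrottle(setting: ThrottleSetting): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(setting));
}

function loadThrottle(): ThrottleSetting | null {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as ThrottleSetting) : null;
}

// When the throttle dialog opens, restore the remembered value if any:
const initial = loadThrottle() ?? { value: 75, unit: "KB/s" };
```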
OK. I can’t make any promises but consider the feature request made. (Note that I moved this to the #features category now that it’s an official request.)
Alternatively, perhaps consider QoS. I tag all backup traffic with a certain QoS value, and my router puts that traffic in a lower-priority bucket. The backup can consume all my upload bandwidth when it's available, but as soon as I do something else, that other traffic takes priority over the backup traffic.
This was my suggestion in another thread. How did you identify “backup traffic” so it could be tagged? I’m familiar with how to tag things on my router, but I’m a bit at a loss on how to identify backup traffic. I’m using Backblaze B2, so it’s outgoing HTTPS traffic coming from 3 or so devices on my network, but two of them could be on one of a couple of different VLANs.
Is there a way to classify just the backup traffic that doesn’t involve a specific set of source and destination IPs?
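One approach that doesn't depend on specific IPs is to tag the traffic by the process that generates it, on each machine, and have the router match on the DSCP mark. On Windows this can reportedly be done with a Group Policy QoS policy; here's a hedged PowerShell sketch (the executable name assumes the tray-icon process, and DSCP 8 corresponds to CS1, a low-priority class):

```
# Sketch: tag outgoing traffic from the Duplicati process with DSCP 8 (CS1)
# so the router can de-prioritize it. The executable name is an assumption
# about which process does the uploading; run from an elevated PowerShell.
New-NetQosPolicy -Name "Duplicati backup" `
    -AppPathNameMatchCondition "Duplicati.GUI.TrayIcon.exe" `
    -DSCPAction 8 -NetworkProfile All
```

Whether the DSCP mark survives to your router depends on your switches and VLAN setup, so treat this as a starting point rather than a turnkey answer.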
I found a possibility if you have only one instance of Duplicati: NetLimiter 4 for Windows.
Unfortunately it’s not freeware, but it works for me! The Pro version costs about $30; maybe the Lite version (about $20) can also do the limiting.
You have many possibilities:
- You can limit the internet speed of the process (Duplicati.GUI.TrayIcon.exe).
- You can set these limits manually “by hand” or with rules, e.g. enable the limit at 23:00 and disable it at 06:00.
- You can limit by target, e.g. 85.214.4.102 for the Deutsche Telekom cloud.
- You can set priorities (I have not tried this yet).
- You can set limits after reaching upload/download quotas (also not tried).
Here’s an example of a limit set just for Duplicati’s internet traffic:
And in the process view you can see the upload limit of 80 KB/sec.
And here a limit is set for a specific cloud server, no matter whether the traffic comes from Duplicati or another process:
If you have more than one PC running Duplicati, a possible solution would be to redirect the traffic of all Duplicati PCs through a proxy that can throttle different users, e.g. CCProxy (about $4/year).
Thanks for that suggestion, it looks really useful!
Not that I’m offering anything like this, but would an advanced parameter like this do what you want? For example:
--throttle-schedule=80KB@09:00-17:00 would limit bandwidth to 80KB/s from 9 AM to 5 PM, otherwise running at the current global (or manually throttled) speed
--throttle-schedule=80KB@09:00-17:00&128KB@17:00-00:00 would limit bandwidth to 80KB/s from 9 AM to 5 PM and 128KB/s from 5 PM to midnight, then whatever the global (or manually throttled) setting is from midnight to 9 AM
Overlapping times would likely use the FIRST time and speed specified. Note that times would likely be approximate, such that a speed change won’t start until the next upload begins. In other words, if it takes 15 minutes to upload a file started at 8:59 AM, then the new speed may not kick in until 9:14 AM.
Note that in this concept there is no day-of-week adjustment possible.
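To make the proposed format concrete, here's a rough sketch of how such a value could be parsed and resolved. This is entirely hypothetical, since no --throttle-schedule option actually exists; the first matching window wins, per the overlap rule above:

```typescript
// Hypothetical parser for the proposed --throttle-schedule value, e.g.
// "80KB@09:00-17:00&128KB@17:00-00:00". No such option exists today.
interface ThrottleWindow {
  speed: string;    // e.g. "80KB", passed through to the throttler as-is
  startMin: number; // window start, minutes since midnight
  endMin: number;   // window end, minutes since midnight (00:00 => 1440)
}

function toMinutes(hhmm: string): number {
  const [h, m] = hhmm.split(":").map(Number);
  return h * 60 + m;
}

function parseSchedule(value: string): ThrottleWindow[] {
  return value.split("&").map((entry) => {
    const [speed, range] = entry.split("@");
    const [start, end] = range.split("-");
    const endMin = toMinutes(end);
    return {
      speed,
      startMin: toMinutes(start),
      endMin: endMin === 0 ? 24 * 60 : endMin, // an end of 00:00 means midnight
    };
  });
}

// Returns the speed for "now", or null to fall back to the global/manual
// setting. Overlapping windows: the FIRST match wins, per the rule above.
function activeSpeed(windows: ThrottleWindow[], now: Date): string | null {
  const mins = now.getHours() * 60 + now.getMinutes();
  const hit = windows.find((w) => mins >= w.startMin && mins < w.endMin);
  return hit ? hit.speed : null;
}

const schedule = parseSchedule("80KB@09:00-17:00&128KB@17:00-00:00");
console.log(activeSpeed(schedule, new Date())); // e.g. "80KB" at 10:30
```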
Edit: It looks like there are some GitHub requests for this type of feature as well. Maybe a bounty could be put together…
My initial question was much easier! I just asked whether the GUI could remember the last value. Your suggestion would be a possibility, but I think it is much more complex than “a little GUI improvement” (remembering the value in a cookie or something like that).
Absolutely - adding a cookie should be easier than adding an advanced parameter. But I know a number of people have asked for throttle scheduling as well.
Is there a way to set the global throttling via the command line using --throttle-upload? I don’t want to actually run the backup command to start a new set. I just want to make adjustments globally.
Yes, you can use --throttle-upload on the command line and it will be applied where applicable (though you might get an “unrecognized parameter” message where it’s not).
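For example, something like this should work for a scripted backup run (the bucket name and paths are placeholders):

```
Duplicati.CommandLine.exe backup "b2://my-bucket/backup" "C:\Users\me\Documents" --throttle-upload=80KB
```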
You can also use the global main menu “Settings” link to set --throttle-upload there, and it will be applied to all jobs (unless a job has its own specific setting, which will then take precedence).
Just in case you’re asking something I’m not understanding, you can also set a lot of parameters using environment variables as described in this topic:
I was hoping there would be something on the command line on Windows for me to script pausing the entire server and setting the throttle limits, not just on individual backup jobs. Something like this:
I may still not be following you - are you wanting to let jobs run at “full speed” and then use some other scheduler to be able to shrink and grow the throttle through command-line calls?
Now that I understand what you want to do, I can definitely say…I have no idea.
I know other users have used QoS (Quality of Service) settings on their routers / gateways (Throttle Bandwidth After Backup Has Started?) or third party apps (see NetLimiter above) that throttle specific traffic or network connections, but I’m betting that @Pectojin will have a thought or two on possibly doing what you’re describing.
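For anyone who wants to experiment, one speculative avenue: the Duplicati server/tray icon hosts the same HTTP API the web UI uses, so in principle a script could call it to pause the server or change the global throttle. The endpoint paths and the setting key below are unverified assumptions about that internal API, not a documented interface, and any UI password / XSRF handling is omitted:

```typescript
// Speculative sketch: drive the Duplicati server's internal HTTP API
// (default http://localhost:8200). Paths, payload shape, and the
// "max-upload-speed" key are assumptions, not a documented interface;
// UI password / XSRF token handling is omitted entirely.
const BASE = "http://localhost:8200/api/v1";

async function pauseServer(): Promise<void> {
  await fetch(`${BASE}/serverstate/pause`, { method: "POST" });
}

async function setGlobalUploadThrottle(bytesPerSec: number): Promise<void> {
  await fetch(`${BASE}/serversettings`, {
    method: "PATCH",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ "max-upload-speed": String(bytesPerSec) }),
  });
}

async function main(): Promise<void> {
  await pauseServer();                      // pause all scheduled activity
  await setGlobalUploadThrottle(80 * 1024); // cap uploads at ~80 KB/s
}

main().catch(console.error);
```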