Thanks for reporting this. We have upgraded the parsing code to be more robust when handling odd characters, but the new code also caught the `.` character and encoded it, so a hostname such as `cloud.test.com` becomes `cloud%2Etest%2Ecom`, which is not correct.
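For context, RFC 3986 lists `.` among the "unreserved" characters, so a compliant encoder must leave it untouched. A rough Python sketch of the difference (illustrative only, not the actual Duplicati code):

```python
from urllib.parse import quote

host = "cloud.test.com"

# RFC 3986 unreserved characters (letters, digits, '-', '.', '_', '~')
# must never be percent-encoded:
print(quote(host))  # cloud.test.com  (correct)

# An over-eager encoder that escapes every non-alphanumeric character
# produces the broken form from the report:
buggy = "".join(c if c.isalnum() else "%%%02X" % ord(c) for c in host)
print(buggy)        # cloud%2Etest%2Ecom  (wrong)
```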
I have a fix ready and will make a new release soon.
Yes. The original API was created when there was no option to do simultaneous transfers, so it reported one speed from the single backend instance. After concurrent uploads were added, the code was never updated and just reported the speed of a random instance (not even the same instance each time).
The canary updates the API and now reports all active transfers and an aggregate transfer speed.
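Conceptually, the aggregate figure is just the sum over the currently active transfers rather than a sample from a single instance. A minimal sketch, with made-up names that are not the actual API:

```python
from dataclasses import dataclass

@dataclass
class ActiveTransfer:
    filename: str
    bytes_per_second: float

# Hypothetical snapshot of concurrent uploads
transfers = [
    ActiveTransfer("upload-1.bin", 2_500_000),
    ActiveTransfer("upload-2.bin", 1_800_000),
]

# Report each transfer individually plus the combined speed
aggregate = sum(t.bytes_per_second for t in transfers)
for t in transfers:
    print(f"{t.filename}: {t.bytes_per_second / 1e6:.1f} MB/s")
print(f"Total: {aggregate / 1e6:.1f} MB/s across {len(transfers)} transfers")
```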
The plan is to make a nice “status” page that shows everything that is going on in greater detail, so each transfer is visible, along with the file enumeration, hashing, compression, etc.
The logic in ngax, which was carried over into ngclient, is to only show the transfer speed when the transfer is what is limiting progress. As long as files are being processed (the BackendIsBlocking flag is false) it will not show the transfer speed, but instead show progress in terms of files.
The reasoning is that, as long as the backend is fast enough, the file processing speed is the important part; once the backend cannot keep up, the transfer becomes the important part.
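A minimal sketch of that decision, assuming a simplified status model (not the actual ngclient code):

```python
def progress_text(backend_is_blocking: bool,
                  files_done: int,
                  files_total: int,
                  transfer_bps: float) -> str:
    # While file processing is the bottleneck, show progress in files;
    # only when the backend cannot keep up is the transfer speed shown.
    if backend_is_blocking:
        return f"Uploading at {transfer_bps / 1e6:.1f} MB/s"
    return f"Processing file {files_done} of {files_total}"

print(progress_text(False, 120, 500, 0.0))       # Processing file 120 of 500
print(progress_text(True, 500, 500, 3_200_000))  # Uploading at 3.2 MB/s
```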
“Correct” here means RFC 3986.
After that it becomes messy because not all URL libraries and parsers do the same thing.
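As a small illustration of how encoders diverge even within a single standard library (Python shown here, but the same split exists elsewhere):

```python
from urllib.parse import quote, quote_plus

# Two encoders in the same library disagree on how to escape a space:
print(quote("a b"))       # a%20b  (RFC 3986 style)
print(quote_plus("a b"))  # a+b    (application/x-www-form-urlencoded style)
```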
Duplicati is especially odd, because I initially wanted to make it easier to use on the command line, which meant I built a custom parser to support something like:
`webdav://user:pass@example.com`
The parser supports some pretty odd things to allow an `@` in the username and password. Support for passing in a plain local file path makes the parsing even more convoluted. Despite all the good intentions behind this parser, it has a number of quirks, and ultimately the downside of not using a standard URL library outweighs the benefits (especially given the ratio of CLI to UI users).
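To make the quirk concrete: a relaxed parser has to guess where the credentials end, for instance by splitting on the last `@`, whereas a strict RFC 3986 parser would require `%40` inside the userinfo part. A rough sketch of that heuristic (not Duplicati's actual parser):

```python
def split_authority(authority: str) -> tuple[str | None, str | None, str]:
    # Relaxed rule: everything before the *last* '@' is userinfo, which is
    # how an unescaped '@' in the password can still be accepted.
    userinfo, sep, host = authority.rpartition("@")
    if not sep:
        return None, None, authority
    user, _, password = userinfo.partition(":")
    return user, password or None, host

print(split_authority("user:p@ss@example.com"))
# ('user', 'p@ss', 'example.com')
```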
The way it works now is that it accepts the legacy odd input, but “prefers” the correctly encoded versions. This is affected a bit by the secret manager, which does an encode/decode cycle on each URL (even if it is not used), so the URL you pass in to Duplicati gets converted into the escaped, strict RFC 3986 format before being passed to the backend.
Going forward, the plan is to add a warning if the input URL does not conform to RFC 3986. For now, the logic is to simply accept the relaxed form and then use the strict form internally.
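A minimal sketch of the decode-then-re-encode idea (illustrative only; note that a raw value containing a literal `%` is exactly the kind of corner case that needs care):

```python
from urllib.parse import quote, unquote

def normalize_component(value: str) -> str:
    # Accept both raw and already-encoded input: decode once,
    # then re-encode strictly so the backend always sees RFC 3986 form.
    return quote(unquote(value), safe="")

print(normalize_component("p@ss word"))      # p%40ss%20word
print(normalize_component("p%40ss%20word"))  # p%40ss%20word (idempotent)
```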
There may be some corner cases that we need to address, such as the one reported by @DCinME here, but ultimately, using standards-compliant URLs will remove a large class of error sources and allow other tools to work better with Duplicati.