Release: 2.1.0.124 (Canary) 2025-07-11


This release is a canary release intended to be used for testing.

Changes in this version

This is primarily a bugfix release for some minor issues that were reported on 2.1.0.123.

Detailed list of changes:

  • Fixed wrong SQLite environment variable name
  • Don’t show stack if stopping a backup due to missing sources
  • Update logic for probing for an access method in ServerUtil
  • Fixed ngax not supporting --use-ssl
  • Use template icons on macOS
  • Fixed an error when loading log data
  • Avoid floods of notifications
  • Corrected encode/decode of URLs
  • Fixed issue with recreated index files not reporting deleted blocks

ngclient changes

  • Fixed SMB editor handling paths incorrectly
  • Fixed source editor not inserting empty paths
  • Fixed an issue with selecting paths with a space
  • Fixed some dialogs not updating UI after clicking cancelable
  • Removed an extra leading slash for Windows destination paths
  • Recalculate file existing on database move
  • Show correct date and format in logs

I upgraded all my machines over the weekend and so far have nothing to report; all looks fine.


I updated to 2.1.0.124, and afterwards my backup job that has a WebDAV backend no longer works. It seems to be having trouble parsing the URL.

If I use the Test Destination button, it says this:

If I ask it to show me the Target URL, it looks okay:

It doesn’t seem to matter if I am in the old UI or the new:

If I just ignore the failed test and try to run the backup, it does not complete.

Edit:
If it’s of any help: it will at least attempt to test the destination if the server field doesn’t contain any dots. For example, cloud.example.com will not even attempt the test, but just entering “cloud” will (though it still fails, for obvious reasons):

For years the status bar upload speed (right-hand side, shown as “at <speed>”) was incorrect, prompting user questions and awkward attempts at answers. Did any of the Canary backend work improve the situation?

If not, maybe the next comment is a feature (not a bug), but the speed seems to rarely appear. A while ago it seemed to show up when an upload finished and a speed became available, but recently I see uploads but no upload speed. I just set up a test using a local folder destination with a 1 MB/s upload throttle and no parallel uploads, and watched in Explorer as files showed up but a speed didn’t. I also had a clock up to watch for any time a speed showed up. It was at least 4 seconds, starting at 19:12:00 UTC. Wireshark shows:

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Mon, 14 Jul 2025 19:11:56 GMT
Server: Kestrel
Transfer-Encoding: chunked


0.000000s
{"BackupID":"5","TaskID":59,"BackendAction":"Put","BackendPath":"duplicati-b4d33062e8cc447dcb3270cdffd578f07.dblock.zip","BackendFileSize":1048981,"BackendFileProgress":0,"BackendSpeed":-1,"BackendIsBlocking":false,"CurrentFilename":"C:\\Users\\Me\\Downloads\\Binary\\duplicati-2.1.0.124_canary_2025-07-11-win-x64-gui.zip","CurrentFilesize":83392132,"CurrentFileoffset":56623104,"CurrentFilecomplete":false,"Phase":"Backup_ProcessingFiles","OverallProgress":0,"ProcessedFileCount":0,"ProcessedFileSize":0,"TotalFileCount":1,"TotalFileSize":83392132,"StillCounting":false,"ActiveTransfers":[{"BackendAction":"Put","BackendPath":"duplicati-b4d33062e8cc447dcb3270cdffd578f07.dblock.zip","BackendFileSize":1048981,"BackendFileProgress":0,"BackendSpeed":-1,"BackendIsBlocking":false}]}

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Mon, 14 Jul 2025 19:11:58 GMT
Server: Kestrel
Transfer-Encoding: chunked


0.000000s
{"BackupID":"5","TaskID":59,"BackendAction":"Put","BackendPath":"duplicati-b730541a2211b48adaf7b4325f84707c0.dblock.zip","BackendFileSize":1048981,"BackendFileProgress":983040,"BackendSpeed":720896,"BackendIsBlocking":false,"CurrentFilename":"C:\\Users\\Me\\Downloads\\Binary\\duplicati-2.1.0.124_canary_2025-07-11-win-x64-gui.zip","CurrentFilesize":83392132,"CurrentFileoffset":61865984,"CurrentFilecomplete":false,"Phase":"Backup_ProcessingFiles","OverallProgress":0,"ProcessedFileCount":0,"ProcessedFileSize":0,"TotalFileCount":1,"TotalFileSize":83392132,"StillCounting":false,"ActiveTransfers":[{"BackendAction":"Put","BackendPath":"duplicati-b730541a2211b48adaf7b4325f84707c0.dblock.zip","BackendFileSize":1048981,"BackendFileProgress":983040,"BackendSpeed":720896,"BackendIsBlocking":false}]}

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Mon, 14 Jul 2025 19:12:00 GMT
Server: Kestrel
Transfer-Encoding: chunked


0.000000s
{"BackupID":"5","TaskID":59,"BackendAction":"Put","BackendPath":"duplicati-bb8226d99e1e14dfdb7f04412ac7a07a6.dblock.zip","BackendFileSize":1048981,"BackendFileProgress":851968,"BackendSpeed":720896,"BackendIsBlocking":false,"CurrentFilename":"C:\\Users\\Me\\Downloads\\Binary\\duplicati-2.1.0.124_canary_2025-07-11-win-x64-gui.zip","CurrentFilesize":83392132,"CurrentFileoffset":61865984,"CurrentFilecomplete":false,"Phase":"Backup_ProcessingFiles","OverallProgress":0,"ProcessedFileCount":0,"ProcessedFileSize":0,"TotalFileCount":1,"TotalFileSize":83392132,"StillCounting":false,"ActiveTransfers":[{"BackendAction":"Put","BackendPath":"duplicati-bb8226d99e1e14dfdb7f04412ac7a07a6.dblock.zip","BackendFileSize":1048981,"BackendFileProgress":851968,"BackendSpeed":720896,"BackendIsBlocking":false}]}

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Mon, 14 Jul 2025 19:12:02 GMT
Server: Kestrel
Transfer-Encoding: chunked


0.000000s
{"BackupID":"5","TaskID":59,"BackendAction":"Get","BackendPath":null,"BackendFileSize":0,"BackendFileProgress":0,"BackendSpeed":-1,"BackendIsBlocking":false,"CurrentFilename":"C:\\Users\\Me\\Downloads\\Binary\\duplicati-2.1.0.124_canary_2025-07-11-win-x64-gui.zip","CurrentFilesize":83392132,"CurrentFileoffset":61865984,"CurrentFilecomplete":false,"Phase":"Backup_ProcessingFiles","OverallProgress":0,"ProcessedFileCount":0,"ProcessedFileSize":0,"TotalFileCount":1,"TotalFileSize":83392132,"StillCounting":false,"ActiveTransfers":[]}
2.019305s
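Reading the captured responses above, the per-transfer speed now travels in the ActiveTransfers list. A minimal sketch of summing it (field names are taken from the JSON above; the function itself is hypothetical, not Duplicati code):

```python
def aggregate_speed(state):
    """Sum per-transfer speeds from the ActiveTransfers list of the
    progress JSON captured above. A BackendSpeed of -1 means the speed
    is not yet known and is skipped; None means no transfer reported
    a usable speed at all."""
    speeds = [t["BackendSpeed"] for t in state.get("ActiveTransfers", [])
              if t.get("BackendSpeed", -1) >= 0]
    return sum(speeds) if speeds else None

# The second captured response reports 720896 bytes/s for its one transfer.
print(aggregate_speed({"ActiveTransfers": [{"BackendSpeed": 720896}]}))  # 720896
```

Applied to the first and last captures above (speed -1, or an empty list), this yields None, matching the stretch where no speed was shown.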

Problem (if it’s really a problem…) can be seen in both old ngax and new ngclient GUI.

I think I also observe this on my Backblaze B2 backup, which is speed limited naturally.

In some good news, with the accumulated fixes I finally have a backup with the proper number of dindex files for dblock files, and the content is clean even after a large compact, meaning recreate no longer has to download a few dblock files (it was 7 for a while, then it dropped to 3) due to bad dindex files. A full-remote-verification is also nice and clean now. These fixes arrived at different times, so I’m just saying the area looks nice now, provided the user does the manual work involving things like test and repair. How do we let people know what they must do for best results in recreate and direct restore from files?

Storage size didn’t drop as much as hoped, maybe because custom retention seems to leave the partial backups alone. There might have been a reason, but I’d have to check.

EDIT 1:

What is correct? If I give ngclient a space, I’m still seeing %20 in ngclient Edit target URL and in the database, and in ngax. Current test is with SSH to avoid some issues with file.

EDIT 2:

URL gets URL-encoded during Submit and Import #324
has some history, including the way it used to be done, and concern about changing that.

Thanks for reporting this. We have upgraded the parsing code to be more robust in case of odd characters, but this caught the . character and encoded it as well, so a hostname such as cloud.test.com becomes cloud%2Etest%2Ecom, which is not correct.

I have a fix ready and will make a new release soon.

Yes. The original API was created when there was no option to do simultaneous transfers, so it reported one speed from the single backend instance. After concurrent uploads were added, the code was never updated and just reported the speed of a random instance (not even the same instance).

The canary updates the API and now reports all active transfers and an aggregate transfer speed.

The plan is to make a nice “status” page that will show everything that is going on in greater detail, so each transfer is visible, and the file enumeration, hashing, compression, etc. are shown.

The logic for ngax, which was carried over into ngclient, is to only show the transfer speed when the transfer is limiting progress. As long as files are being processed (the BackendIsBlocking flag is false) it will not show the transfer speed, but instead show progress in terms of files.

The reasoning is that, if the backend is fast enough, the important part is the file processing speed. Once the backend cannot keep up, the transfer becomes the important part.
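That rule can be sketched as follows (a hypothetical helper, not the actual UI code; field names come from the server’s progress JSON):

```python
def status_bar_text(progress):
    """Hypothetical sketch of the display rule described above: show
    the transfer speed only when the backend is the bottleneck
    (BackendIsBlocking is true); otherwise show file progress."""
    if progress.get("BackendIsBlocking"):
        speed = progress.get("BackendSpeed", -1)
        if speed >= 0:
            return f"at {speed // 1024} KB/s"   # speed is bytes/second
        return "waiting for backend"
    done = progress.get("ProcessedFileCount", 0)
    total = progress.get("TotalFileCount", 0)
    return f"{done} of {total} files"

# A blocked transfer at 720896 bytes/s would be shown as a speed.
print(status_bar_text({"BackendIsBlocking": True, "BackendSpeed": 720896}))
# at 704 KB/s
```

With BackendIsBlocking false, the same state yields only the file count, which matches the reports above where a speed was present but not shown.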

Correct means RFC 3986.

After that it becomes messy because not all URL libraries and parsers do the same thing.

Duplicati is especially odd, because I initially wanted to make it easier to use on the commandline, which meant I built a custom parser to support something like:

webdav://user:pass@example.com

The parser supports some pretty odd things to allow a @ in the username and password. Also, support for just passing in a local file path makes the parsing even more convoluted. Despite all the good intentions of this parser, it also has a number of quirks, and ultimately the downside of not using a standard URL library outweighs the benefits (especially given the number of CLI vs UI users).

The way it works now is that it accepts the legacy odd input but “prefers” the correctly encoded versions. This is affected a bit by the secret manager, which does an encode/decode cycle on each URL (even if it is not used), so the URL you pass in to Duplicati gets converted into the strict, escaped RFC 3986 format before being passed to the backend.

Going forward, the plan is to add a warning if the input URL does not conform to RFC 3986. For the current state, the logic is to simply accept the relaxed form and then use the strict form.

There may be some corners that we need to address, such as the one reported by @DCinME here, but ultimately, using standard compliant URLs will remove a large class of error sources, and allow other tools to work better with Duplicati.
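For illustration, RFC 3986 percent-encoding applies per component: unreserved characters such as `.` must never be escaped, while reserved characters inside the userinfo part must be. A quick check with Python’s urllib (the URL itself is made up):

```python
from urllib.parse import quote

# "." is unreserved, so a hostname must pass through untouched.
print(quote("cloud.test.com"))          # cloud.test.com

# "@" and ":" are reserved and must be escaped inside the userinfo part.
password = quote("p@ss:word", safe="")
print(f"webdav://user:{password}@cloud.test.com")
# webdav://user:p%40ss%3Aword@cloud.test.com
```

Encoding the hostname’s dots (the cloud%2Etest%2Ecom case reported above) is exactly the kind of over-escaping the RFC forbids.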

Except it rarely reports. That was the question, and reply below says it usually won’t?

Maybe the nice “status” page will do better, meanwhile, all we’ve got is the status bar.

This does not seem to be what my example showed. You can see the two middle reports:

"BackendSpeed":720896,"BackendIsBlocking":false

It acts as if the speed is shown whenever BackendSpeed isn’t -1. I can try again.

I did do more testing on an upload slowed at the router. This does fall into the pattern:

"BackendSpeed":-1,"BackendIsBlocking":true

and presumably no speed is shown because -1 means no speed is known.
The end result is that a backup that is probably very backend-limited (800 Kbps) shows nothing.

So for now the old non-RFC URLs should silently work. Good.
New URLs on old Duplicati might fare worse, but that is less critical.

What, if anything, spares the user from having to hand-encode?

Presumably the broken-out editor fields are in human version?

Is URL edit supposed to use URL-RFC-user-challenging form?

Are ngclient and ngax both following new plan and compatible?

Once the plan is known, I can better say whether or not it’s working.
Ideally, anything that user needs to know should get in the docs.

It should always “report”, as in the API returns the speed.
The UI just does not always show it.

Is this what you see as well?

That looks like the transfer has not started yet?

There are both the original/legacy “single transfer” fields (BackendSpeed) and the new ActiveTransfers field. The new field shows all active transfers and their speeds, where the original/legacy fields only show a single transfer.

Do you see multiple transfers, but just one of them being blocked?

Yes, there are some issues going back: you may have a destination URL that is not handled correctly, because the older versions did not correctly handle a correctly encoded URL.

For the UI, the editor is doing that work. For the CLI, the burden is on the user: you can use any URL-encoding tool to encode the URL. However, the parser does not require the full encoding that the UI produces; it is just less error-prone in the UI. For example, a URL such as file:///C:\test is valid and will parse correctly, but the UI will instead create something like file:///C%3A%5Ctest.
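The relaxed-vs-strict difference can be reproduced with any standard URL library; for example with Python’s urllib (illustrative only, not what Duplicati runs internally):

```python
from urllib.parse import quote, unquote

# The relaxed path the parser accepts, and the strict form the UI emits.
relaxed_path = r"C:\test"
strict_path = quote(relaxed_path, safe="")
print("file:///" + strict_path)   # file:///C%3A%5Ctest

# Decoding the strict form recovers the original path component.
assert unquote(strict_path) == relaxed_path
```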

Yes. No encoding needs to be done when using the UI.

URL edit is supposed to parse the input and produce a strictly encoded output; generally, “flexible in input, strict in output”. The user is not expected to know about the URL format when using the UI. Users pasting in URLs should know how to encode the URL, or at least check the results.

Yes and no.

Yes, both work and produce equivalent results.
No, because ngax still generates the relaxed URLs.

True. The relaxed format is really hard to document because it is “kind-of-url”. The new format is easy to document because it follows the RFC.

3 posts were split to a new topic: Throttle speed for B2 looks inconsistent with spikes

What about ngax decoding? A GUI “Folder path” created in ngclient looks OK there, but ngax says:

/C%3A%5CDuplicati%20Backups%5C
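For reference, percent-decoding that string with any URL library recovers the readable form ngclient shows (using Python’s urllib purely as an illustration):

```python
from urllib.parse import unquote

# %3A -> ":", %5C -> "\", %20 -> " "
print(unquote("/C%3A%5CDuplicati%20Backups%5C"))
# /C:\Duplicati Backups\
```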

ngax showing the three-dot “Copy Destination URL to Clipboard” URL as encoded is a bit awkward, but maybe it follows the URL “strict in output” rule; it’s out of the way anyway, and an advanced feature.

I have higher human-readability expectations for broken-out-editor fields.

I have opted not to work on the ngax editor field here. The problem occurs if you save the backup with ngclient and then go to ngax. The correct fix would be to URL-decode in ngax, but I prefer not to edit that part, as it is likely to introduce subtle bugs, and the issue has no functional impact.