Duplicati website points to GitHub instead of MSI

All download links on http://www.duplicati.com/download point to Releases · duplicati/duplicati · GitHub.

This issue has occurred once before. See here: https://forum.duplicati.com/t/download-links-point-all-to-github-site/7076


Hmm, I just checked and everything seemed to link to actual install bundles except, of course, for the two under “for developers and system administrators”. Maybe someone fixed it already? Or maybe it’s browser dependent? I’m using Vivaldi.

We tried contacting the server admin. Maybe we succeeded? Earlier, the updates.duplicati.com site was returning a 502 error as described in the old forum post. I didn’t keep a screenshot, but here is another view:

Duplicati.Library.AutoUpdater.exe check
Error detected: System.Net.WebException: The remote server returned an error: (502) Bad Gateway.
   at System.Net.WebClient.DownloadFile(Uri address, String fileName)
   at Duplicati.Library.AutoUpdater.UpdaterManager.CheckForUpdate(ReleaseType channel)
No updates found
(but now)
Duplicati.Library.AutoUpdater.exe check
No updates found

Interestingly, the alt.updates.duplicati.com server was up earlier, but the download page ignores it.
My JavaScript isn’t good enough to devise a fix (any web developers around?), but there is another complication: https://alt.updates.duplicati.com/beta/latest-installers.js returns links pointing at updates.duplicati.com, the failed server, so an actual download attempt might still have failed.
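As a purely hypothetical workaround sketch (not the actual page code, and the function name is my own), a fix could rewrite the host in the links the alt server hands back before attempting the download:

```javascript
// Hypothetical sketch: rewrite installer links served by the alt mirror
// so they stop pointing at the broken primary host.
const PRIMARY = "updates.duplicati.com";
const ALT = "alt.updates.duplicati.com";

function rewriteToAlt(url) {
  // Swap only the host portion, leaving the path and filename intact.
  return url.replace("//" + PRIMARY + "/", "//" + ALT + "/");
}

console.log(rewriteToAlt("https://updates.duplicati.com/beta/duplicati.msi"));
// → https://alt.updates.duplicati.com/beta/duplicati.msi
```

Of course this only helps if the alt server actually hosts the same files, which is the open question below.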

Interesting. The downloads page appears to be working now, correctly linking to the MSI files.

Hi, the problem still appears for me this morning. I checked on Firefox and Edge.

Behavior keeps changing. Maybe something is in a restart loop, either by itself or with some help.
updates.duplicati.com is still not responding correctly; sometimes there is no response at all and sometimes a 502:

Duplicati.Library.AutoUpdater.exe check
Error detected: System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
   at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
   at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)
   --- End of inner exception stack trace ---
   at System.Net.WebClient.DownloadFile(Uri address, String fileName)
   at Duplicati.Library.AutoUpdater.UpdaterManager.CheckForUpdate(ReleaseType channel)
No updates found
Duplicati.Library.AutoUpdater.exe check
Error detected: System.Net.WebException: The remote server returned an error: (502) Bad Gateway.
   at System.Net.WebClient.DownloadFile(Uri address, String fileName)
   at Duplicati.Library.AutoUpdater.UpdaterManager.CheckForUpdate(ReleaseType channel)
No updates found

Then a third try later went back to the first no-response case. Here is the view from a different tool:

C:\>curl https://updates.duplicati.com/beta/latest-installers.js
curl: (28) SSL/TLS connection timeout

Wireshark shows the server sending a TCP-level ACK to the TLS Client Hello and then not continuing, so the TLS handshake never even completes.
I will try again to contact the server admin.

Meanwhile, a typical user would want to skip over the GitHub Canary releases and download the Beta.


Windows users should get the .msi version, and likely the x64.msi unless they are running a rare 32-bit Windows.

I’m not sure what alt.updates.duplicati.com is doing, but it doesn’t seem to be a complete mirror of updates.duplicati.com.

The “beta/latest-installers.js” script exists on both, but the “js” folder seems to be absent from alt.updates. Without access to the “js/download.js” script, the downloads page won’t be modified to include the proper (non-GitHub) links. Maybe it’s as simple as the “js” folder being missing on alt.

If you look at the page source you linked above, it shows an empty <div id="current-os"></div> (lines 16-18); that empty div gets populated by the “download.js” script, if the browser can access it.

At the end of the day you could just put the links directly into the page and forgo the script altogether. That script essentially delivers a JSON payload that populates the div on $(document).ready. I’m sure there is a reason it’s not coded directly into the page, but hard-coding the links could keep things working for now.
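For illustration, here is roughly what such a script does. The installer metadata shape and the filenames are my assumptions, not copied from the real latest-installers.js:

```javascript
// Illustration only: the actual installer metadata format may differ.
// A small script like download.js turns data like this into anchor tags
// and injects them into the empty <div id="current-os"></div>.
const installers = [
  { name: "duplicati-x64.msi", url: "https://updates.duplicati.com/beta/duplicati-x64.msi" },
  { name: "duplicati-x86.msi", url: "https://updates.duplicati.com/beta/duplicati-x86.msi" },
];

function renderLinks(items) {
  // Build the markup that would fill the div on $(document).ready.
  return items
    .map((i) => '<a href="' + i.url + '">' + i.name + "</a>")
    .join("\n");
}

console.log(renderLinks(installers));
```

Hard-coding would just mean pasting the output of something like this into the page, at the cost of needing a commit per release.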

The Duplicati website in GitHub changes rarely; the download page hasn’t changed since 2017. I assume that’s possible only because the web server can (when the plan works…) get installer info from the update server.

Probably yes, but

I don’t know how a release updates the various servers, but changing the web site (in GitHub) seemingly doesn’t happen now, and maybe that simplifies tasks. As a non-web-developer, I don’t know whether some other page updater could or should be built to update the web server, using a more sophisticated and robust method than could easily be expressed in JavaScript on the download page.

I’m not sure what sort of failover logic one can put in JavaScript, but a cron job or similar script could monitor both update servers. If one acts up, notify the server admin and use the other for installer info.
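A minimal sketch of that monitoring idea, with the probe injected so the logic can run without real network access (the server URLs come from this thread; everything else is hypothetical):

```javascript
// Hedged sketch of a cron-style monitor: probe each update server and
// return the ones that failed, i.e. the candidates for an admin
// notification. The probe function is injected, so no network is needed.
const SERVERS = [
  "https://updates.duplicati.com/beta/latest-installers.js",
  "https://alt.updates.duplicati.com/beta/latest-installers.js",
];

function findUnhealthy(servers, probe) {
  const down = [];
  for (const url of servers) {
    let ok = false;
    try {
      ok = probe(url); // true when the server answered correctly
    } catch (e) {
      ok = false; // timeout or connection refused counts as down
    }
    if (!ok) down.push(url);
  }
  return down;
}

// Example with a stub probe that says only the alt server is up:
console.log(findUnhealthy(SERVERS, (url) => url.includes("alt.")));
// → [ 'https://updates.duplicati.com/beta/latest-installers.js' ]
```

In a real cron job the probe would be an HTTP request with a timeout, and a non-empty result would trigger the notification.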

You make some good observations about the asymmetry, but note mine as well: the asymmetry matters here, because having the alt server hand out links to a broken primary (I verified a download failure) won’t work.

Regardless, this is out of our hands. I made another notification attempt, and there is another channel if needed.
People tend to have day jobs, and I don’t know if this counts as an emergency (as an OAuth server outage would be).

For whatever reason, the update server seems to be the most troublesome one. Any NGINX experts around?

I did get a warning that my monitor could not access the site. It appears to be running now, and cURL can read the file. I did not change anything, so it may have been a temporary outage at DigitalOcean, which hosts the update servers.

Yes, there are three mirrors: two serving under updates.duplicati.com and one serving under alt.updates.duplicati.com. The updater will try updates.duplicati.com first (either mirror) and fall back to the alt one.
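That fallback order could be sketched like this. This is my reading of the described behavior, not the actual AutoUpdater source, and the fetcher is a stub:

```javascript
// Sketch of the described fallback: try each mirror in order and use the
// first one that answers; if all fail, report nothing found.
function fetchFromFirstWorking(servers, fetchFn) {
  for (const url of servers) {
    try {
      return { server: url, data: fetchFn(url) };
    } catch (e) {
      // This mirror failed; fall through to the next one.
    }
  }
  return null; // every mirror failed, i.e. "No updates found"
}

// Stub example: the primary throws, the alt answers.
const result = fetchFromFirstWorking(
  ["https://updates.duplicati.com", "https://alt.updates.duplicati.com"],
  (url) => {
    if (!url.includes("alt.")) throw new Error("502 Bad Gateway");
    return "{ ... installer info ... }";
  }
);
console.log(result.server); // → https://alt.updates.duplicati.com
```

This also matches the observation further down that the check printed “No updates found” even after the primary errored: the loop kept going.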

The reason is that the GitHub Pages setup that hosts the main website is a pain to self-host, which was required back then because GitHub did not support custom domains. So the website runs a self-hosted version of Jekyll that renders the page from GitHub.

To avoid a commit to that page for each deploy, it loads the latest.json file so the page can remain static and always serve the latest version.

Yes, I have that in place. It is not great, but I do get a notice once a day if it stops responding.

The alt server is used only for the auto-update feature; it is not used to get the installer information. I guess that would be nice, but perhaps a better solution would be a redirect to the GitHub releases page.


Thanks for the explanation, makes sense.

If you’re talking about linking users directly to https://github.com/duplicati/duplicati/releases, I’m not so sure about that. The current beta is the sixth item down, and unless people are really reading they won’t find it; it doesn’t stand out at all, so they’ll just end up picking a canary version. Sure, that could be a good thing for testing, but it’s probably not optimal for the user experience. I guess it’s better than a straight-up 502 error, but it’s not pretty. If you’re talking about a link directly to the current beta, then it’s much less of an issue, and most users should be able to find what they need from there.

When the update server(s) are working it seems to work fine, and I’m a big fan of “if it ain’t broke, don’t fix it” regarding the JSON delivery method.

I agree, it was likely just a network issue or “something on a Friday” at DO, which you probably can’t do much about. Moving forward, though, mirroring a bit more data and functionality to alt.updates may not be a bad idea, if it’s an option. I presume both updates. servers are internal to DO, so if the network was the issue, it wouldn’t matter how many servers you have: 2 or 200, none of them would be available in that situation.

Now I have to ask: why did the autoupdater checks (above by @ts678) fail when the updates. server became unavailable? If handling those is one of the alt.updates. server’s jobs (maybe its only one), then shouldn’t the alt.updates server have responded to those checks?

Are you sure it didn’t? Even after the error message, it said “No updates found”.
Although one might worry it was just confused, the code looks like it loops over both servers.

(I didn’t study the code much beyond thinking it looked like it loops over and could check both servers.)