Plans to update docker image?

Are there any plans to fix the Docker image’s certificate authentication problems? As documented:

the Docker image has a bad/out-of-date certificate that causes any connections to Backblaze B2 to fail. The forum posts above have a solution to this. Is that fix ever going to get merged into the official docker image?

The reason I ask is that I’m trying to install on TrueNAS SCALE using TrueCharts, but have run into this issue. The Docker container filesystem is read-only, so I can’t perform the above fix. This issue has been brought up to TrueCharts, but their response is that the official Docker image needs to be fixed. The image hasn’t been touched since 2021.

I should note that I’m referring to the “latest” docker images, not the canary images.

Edit: Looks like there’s a GH issue regarding this as well: "CERTIFICATE_VERIFY_FAILED" on all connection attempts · Issue #4721 · duplicati/duplicati · GitHub
Doesn’t look like anything was done about it though.

You are at risk of missing the current planning cycle, which gathered some pull requests:

2023 release planning leading to
Preview of security and bugfix release Canary 105
which, as of the last post, is held up by a build-and-release issue after a period of testing the combined changes.

That seems a bit non-standard. Did you set that up, and did you verify it by going in and testing?
The Docker Objects documentation says the final container layer is read-write. docker run can set a container read-only, but that may break things.
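
For reference, a minimal sketch of both sides (the flag is a standard docker run option; the Kubernetes snippet is only my guess at what TrueCharts might be setting through Helm):

# plain Docker: run the container with a read-only filesystem
docker run --read-only duplicati/duplicati

# Kubernetes equivalent a chart might set:
# securityContext:
#   readOnlyRootFilesystem: true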

The last Beta was 2.0.6.3_beta_2021-06-17, so Docker isn’t being singled out. Volunteers are scarce.
Do you know Docker well enough to say whether a Dockerfile change could perform the steps of a workaround?
It might be possible to squeak a build change in. I don’t know if anyone has tested the Docker image yet.

EDIT:

I have asked at the above workaround link about finding the minimal steps. Alternatively, you could help find them.
You would need to be able to run the image without TrueCharts. Is anybody else here willing to test?

After we have found what we think is a Dockerfile fix, it would be good if a pre-Canary test were possible.
I’m not sure whether the Docker build is included in the preview linked above. If it is, that suggests this path is possible.
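
For anyone willing, a rough sketch of such a test (untested by me; the image name and the presence of csharp in it are assumptions based on the posts above):

docker run --rm -d --name dup-test duplicati/duplicati:latest
docker exec dup-test csharp -e 'new System.Net.WebClient ().DownloadString ("https://api.backblazeb2.com")'
docker stop dup-test

If the image is broken, the second command should fail with the CERTIFICATE_VERIFY_FAILED error instead of returning HTML.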

I didn’t set that up; that seems to be the way that TrueNAS SCALE/TrueCharts/Helm set up the container. My experience is from SSHing into the container itself and finding that I can’t do any of the workaround steps due to the read-only filesystem, even as root.

Putting quotes around “latest” was to specify the latest tag in Docker, not a backhanded complaint about how old it is. I get how open-source projects can be, and while I can’t contribute much (if anything) in time or skills, I do donate to the project.

Unfortunately my knowledge of Docker is limited to mostly conceptual ideas and pattern-matching files. I was actually treating my migration to TrueNAS SCALE as a “let’s see how to use containers” exercise. That said, looking at the RUN section, I don’t see why running the workaround steps in there wouldn’t solve it; see the sketch below.
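
Something like this is what I’m picturing, completely untested, just appending the workaround steps from the forum posts to the existing RUN section:

RUN rm /usr/share/ca-certificates/mozilla/DST_Root_CA_X3.crt && \
    update-ca-certificates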

My next plan was to circumvent the TrueCharts part and run the Docker image directly, so I might be able to play around at that stage and see what I can test (assuming I can get it running…).

I think minimal steps can be found in this post: Http send report errors / duplicati-monitoring - #16 by hairlesshobo

But it might also be the case that moving to a different base OS would fix the issue, since the root problem is “old” certificates.

It might also be the case that this isn’t even a problem on Canary builds, whether through a change to Duplicati’s code (doubtful) or just an update to the image (my theory is that it does work). Again, something I can test (once I figure out how to access the Duplicati interface after running the container).
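
(My understanding, possibly wrong, is that something like this should expose the interface, with 8200 being the web UI port:

docker run --rm -p 8200:8200 duplicati/duplicati:latest

and then browse to http://localhost:8200.)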

This one sure isn’t going to happen in the last days before release. Maybe someday.
If you read my workaround link, we now have confirmation of what the true minimal steps are.
I don’t know why anything apt-related would be useful, especially since Debian didn’t fix the cert…
I don’t run Docker, but I would have thought updating the OS inside a container wasn’t always advised.

Yeah, no worries there. I’m not expecting anyone to move mountains or anything, though I hope for maybe a little bit sooner than “someday”. Assuming we can get a working fix, I hope the next release would make sense.

It might just be a case of debugging for a while and finding a solution, without trying to get to the minimal fix, or trying to cover every possible use case to ensure the solution is in the correct state. A “don’t try to get too clever” type of mindset.

For your original situation that needs a Beta, those have never been very frequent.
The lack of developer volunteers has been extreme (any out there?), only recently
improving enough to at least pull in some existing PRs, hopefully leading to a Beta.
On that note, let me see what @gpatel-fr thinks of a Docker build change or a test.

Working on TrueNAS SCALE, I can confirm that doing:

root@ix-duplicati-docker-ix-chart-c7f4cdf5d-gfs8g:/$ rm /usr/share/ca-certificates/mozilla/DST_Root_CA_X3.crt
root@ix-duplicati-docker-ix-chart-c7f4cdf5d-gfs8g:/$ update-ca-certificates
Updating certificates in /etc/ssl/certs...
W: /usr/share/ca-certificates/mozilla/DST_Root_CA_X3.crt not found, but listed in /etc/ca-certificates.conf.
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
Updating Mono key store
Mono Certificate Store Sync - version 6.12.0.122
Populate Mono certificate store from a concatenated list of certificates.
Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed.

Importing into legacy system store:
I already trust 137, your new list has 136
1 previously trusted certificates were removed.
Certificate removed: O=Digital Signature Trust Co., CN=DST Root CA X3
Import process completed.

Importing into BTLS system store:
I already trust 137, your new list has 136
1 previously trusted certificates were removed.
Certificate removed: O=Digital Signature Trust Co., CN=DST Root CA X3
Import process completed.
Done
done.

does remove the TrustCertification error.

Following this post, this can easily be confirmed from the command line via:

root@ix-duplicati-docker-ix-chart-c7f4cdf5d-gfs8g:/$ csharp -e 'new System.Net.WebClient ().DownloadString ("https://api.backblazeb2.com")'
"<!DOCTYPE html>
<html lang="en">
<head>

<script src="https://cdn.cookielaw.org/scripttemplates/otSDKStub.js" type="text/javascript" charset="UTF-8" data-domain-script="c2b991fa-af6b-41eb-a5e8-4d9878afe4d8"></script>
<style type="text/css">
[.........]

The above is old news, but I’ve confirmed this is the case for both the beta/latest and canary Docker images. I’ll now try to see if I can make changes to the Dockerfile to automatically fix the issue at build time.

I am in no way a Docker expert; what I know is that when I manage to get a working canary install into the hands of the project owner, the build installer script should build and push a Docker image to Docker Hub (that’s the intent, at least).

I have downloaded the latest mono image (6-slim), and ‘openssl s_client -connect api.backblazeb2.com:443’ seems to work all right inside the container, so it should be good (if I manage to get the installer working before Buster goes obsolete, or ChatGptX achieves the singularity, whichever comes first).
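
In case anyone wants to reproduce that check, this is roughly the invocation, assuming openssl is installed in the image:

docker run --rm mono:6-slim openssl s_client -connect api.backblazeb2.com:443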

I’m even less of an SSL/security expert than I am a Docker expert, but I think the “root” issue is that, yes, openssl works, but Mono uses BoringSSL.

Can you try csharp -e 'new System.Net.WebClient ().DownloadString ("https://api.backblazeb2.com")' in your test? That seems to be the way to test how Duplicati interfaces with Backblaze.

It depends on the version. Buster has OpenSSL 1.1.1n according to repology. Stretch has 1.0.2u.

Old Let’s Encrypt Root Certificate Expiration and OpenSSL 1.0.2 (OpenSSL Blog explains issue)

You can directly check the openssl version (which might explain why it works) and check the certificate.
The suggested test with csharp would be best. I think that command was in the Docker image, at one point…
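
For example, inside the container (standard openssl commands):

openssl version
openssl s_client -connect api.backblazeb2.com:443 -showcerts

The first line shows whether it’s 1.1.1 or 1.0.2; the -showcerts output includes the chain the server actually sends.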

Success! Applying the following change to context/Dockerfile:

14c14,16
-     cert-sync /etc/ssl/certs/ca-certificates.crt
---
+     cert-sync /etc/ssl/certs/ca-certificates.crt && \
+     rm /usr/share/ca-certificates/mozilla/DST_Root_CA_X3.crt && \
+     update-ca-certificates

has Backblaze connecting successfully in TrueNAS SCALE (and presumably elsewhere). That was the only change needed, too.
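
For anyone reading along, the resulting RUN step then reads roughly like this (earlier steps elided, since I’m only quoting the diff context):

RUN <earlier steps> && \
    cert-sync /etc/ssl/certs/ca-certificates.crt && \
    rm /usr/share/ca-certificates/mozilla/DST_Root_CA_X3.crt && \
    update-ca-certificates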

Edit: Note I did test a build of the Docker container without any modifications to verify the behavior before implementing the above fix.

Yeah, I was able to run the csharp command inside the Docker without having to do anything.

Yes, that’s it. Only ChatGpt could explain why deleting an obsolete certificate can make a connection succeed, but it’s as you say.

The theory is that Debian Buster feels no pressing need to fix the cert because it doesn’t hurt them.
The question is whether Mono Project (who got hurt) has slid a cert fix into their image. It’d be nice.
OTOH ever since Microsoft bought them, their attention has been on .NET. Sad, but not surprising.

If it turns out we have to do our own two RUN lines, maybe do them right after we take their image?
Our Dockerfile seems to use different sections for different things, which is why I’m suggesting that.
Lines that begin with a # are comments, in case it seems reasonable to note the reason for the change… sketch below.
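
A sketch of what that could look like near the top of the Dockerfile (base image name assumed from the mono 6-slim discussion above; comment wording is just a suggestion):

FROM mono:6-slim
# Work around the expired DST Root CA X3 (see GitHub issue #4721):
# removing the expired root lets Mono build the valid chain to ISRG Root X1.
RUN rm /usr/share/ca-certificates/mozilla/DST_Root_CA_X3.crt && \
    update-ca-certificates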

This post is talking about duplicati-monitoring.com, but they determined that the certificate is somewhere in the chain of certification. I think we could go through the same process for Backblaze if we really felt like it.

I think that’d be wise. Otherwise those who don’t know would be confused AF as to why we’re deleting this one seemingly random certificate.

The OpenSSL blog explains how the failure works. Let’s Encrypt also has a nice summary of the issue.

Production Chain Changes explains and links to how they saved old Android by chaining to an expired certificate, because Android doesn’t mind, and the idea actually has some standards support behind it.

There are two versions of ISRG Root X1, but the default uses the Android-supporting chain-to-expired.

There’s an alternative chain with a self-signed ISRG Root X1 that relies on that root being installed on the OS. While waiting for everyone to catch up, they save Android but break Mono and OpenSSL 1.0.2, which was the initial BoringSSL code source (though not the current one). Mono won’t upgrade, so it still breaks on the old cert…

Removing the old cert forces the code to realize it actually has the cert in its trust store, so it then works.

OpenSSL explains that OpenSSL 1.0.2 “always prefers the untrusted chain”, following it to expired end.

EDIT to add an example chain, as shown by openssl:

Certificate chain
 0 s:CN = backblazeb2.com
   i:C = US, O = Let's Encrypt, CN = R3
 1 s:C = US, O = Let's Encrypt, CN = R3
   i:C = US, O = Internet Security Research Group, CN = ISRG Root X1
 2 s:C = US, O = Internet Security Research Group, CN = ISRG Root X1
   i:O = Digital Signature Trust Co., CN = DST Root CA X3

EDIT 2:

Google BoringSSL commit also explains the multiple paths, and how one can prefer the trusted path.
Enable X509_V_FLAG_TRUSTED_FIRST flag in BoringSSL #21233 is Mono team declining to do it.
https://valid-isrgrootx1.letsencrypt.org/ is a test site for the self-signed ISRG Root X1, that looks like:

Certificate chain
 0 s:CN = valid-isrgrootx1.letsencrypt.org
   i:C = US, O = Let's Encrypt, CN = R3
 1 s:C = US, O = Let's Encrypt, CN = R3
   i:C = US, O = Internet Security Research Group, CN = ISRG Root X1

Sometimes people call this one the short chain. After the cross-signed cert expires in 2024, maybe this will be what’s used. If so, old Android stops working, and mono/Duplicati may start working once again.
