Using MinIO (S3) backend with Letsencrypt (wildcard) certificate

The change to allow TLS 1.2 seemed to coincide with the move away from ECC.
I have some SSL Labs scans still open in browser tabs, so here’s Jan 20 vs. 25:

I wonder if that’s what actually fixed this? At least on Linux, mono does ECC fine.
I don’t have mono on macOS, but if anyone is willing to test, that might add data.
One needs to be a little careful here, because a certificate carries several distinct algorithm fields, as the X.509 article (Wikipedia) shows:

Signature Algorithm ID

Public Key Algorithm
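The two fields are easy to see side by side on a throwaway certificate (a sketch; the /tmp paths are arbitrary). On a self-signed ECC cert both come out as ECDSA, whereas in a CA-issued chain they can differ:

```shell
# Generate a throwaway P-256 key and self-sign a certificate with it,
# then show the key's algorithm vs. the signature's algorithm.
openssl ecparam -name prime256v1 -genkey -noout -out /tmp/demo-ec.key
openssl req -new -x509 -key /tmp/demo-ec.key -out /tmp/demo-ec.crt \
    -days 1 -subj "/CN=demo.test"
# Typically prints ecdsa-with-SHA256 (signature) and id-ecPublicKey (key)
openssl x509 -in /tmp/demo-ec.crt -noout -text \
    | egrep 'Signature Algorithm|Public Key Algorithm'
```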

These sites run the csharp query fine. Here are the algorithms that openssl saw; the answer to “OpenSSL: Get all certificates from a website in plain text” was the base of the test.

$ (OLDIFS=$IFS; IFS=':' certificates=$(openssl s_client -connect www.cloudflare.com:443 -showcerts -tlsextdebug 2>&1 </dev/null | sed -n '/-----BEGIN/,/-----END/ {/-----BEGIN/ s/^/:/; p}'); for certificate in ${certificates#:}; do echo $certificate | openssl x509 -noout -text; done; IFS=$OLDIFS) | egrep 'Certificate:|Subject:|Algorithm'
Certificate:
        Signature Algorithm: ecdsa-with-SHA256
        Subject: C = US, ST = California, L = San Francisco, O = "Cloudflare, Inc.", CN = www.cloudflare.com
            Public Key Algorithm: id-ecPublicKey
    Signature Algorithm: ecdsa-with-SHA256
Certificate:
        Signature Algorithm: sha256WithRSAEncryption
        Subject: C = US, O = "Cloudflare, Inc.", CN = Cloudflare Inc ECC CA-3
            Public Key Algorithm: id-ecPublicKey
    Signature Algorithm: sha256WithRSAEncryption
$ (OLDIFS=$IFS; IFS=':' certificates=$(openssl s_client -connect readthedocs.org:443 -showcerts -tlsextdebug 2>&1 </dev/null | sed -n '/-----BEGIN/,/-----END/ {/-----BEGIN/ s/^/:/; p}'); for certificate in ${certificates#:}; do echo $certificate | openssl x509 -noout -text; done; IFS=$OLDIFS) | egrep 'Certificate:|Subject:|Algorithm'
Certificate:
        Signature Algorithm: ecdsa-with-SHA384
        Subject: CN = *.readthedocs.org
            Public Key Algorithm: id-ecPublicKey
    Signature Algorithm: ecdsa-with-SHA384
Certificate:
        Signature Algorithm: ecdsa-with-SHA384
        Subject: C = US, O = Let's Encrypt, CN = E1
            Public Key Algorithm: id-ecPublicKey
    Signature Algorithm: ecdsa-with-SHA384
Certificate:
        Signature Algorithm: sha256WithRSAEncryption
        Subject: C = US, O = Internet Security Research Group, CN = ISRG Root X2
            Public Key Algorithm: id-ecPublicKey
    Signature Algorithm: sha256WithRSAEncryption
Certificate:
        Signature Algorithm: sha256WithRSAEncryption
        Subject: C = US, O = Internet Security Research Group, CN = ISRG Root X1
            Public Key Algorithm: rsaEncryption
    Signature Algorithm: sha256WithRSAEncryption
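There is also a shorter way to decode every certificate in a PEM bundle: wrap it in a PKCS#7 container and let openssl print each cert. A sketch, demonstrated offline on a locally generated two-cert bundle; for a live site you would feed it the `openssl s_client -showcerts` output instead:

```shell
# Build two throwaway self-signed certs as a stand-in for a server's chain.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/k1.pem \
    -out /tmp/c1.pem -days 1 -subj "/CN=one.test" 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/k2.pem \
    -out /tmp/c2.pem -days 1 -subj "/CN=two.test" 2>/dev/null
# crl2pkcs7 packs all certs from the bundle; pkcs7 -print_certs unpacks
# and (with -text) fully decodes each one.
cat /tmp/c1.pem /tmp/c2.pem \
  | openssl crl2pkcs7 -nocrl -certfile /dev/stdin \
  | openssl pkcs7 -print_certs -noout -text \
  | egrep 'Certificate:|Subject:|Algorithm'
```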

The Certbot issue “ECDSA key signed with RSA” documents how Let’s Encrypt chains an ECDSA leaf into the RSA R3 intermediate, e.g.:

$ (OLDIFS=$IFS; IFS=':' certificates=$(openssl s_client -connect letsencrypt.org:443 -showcerts -tlsextdebug 2>&1 </dev/null | sed -n '/-----BEGIN/,/-----END/ {/-----BEGIN/ s/^/:/; p}'); for certificate in ${certificates#:}; do echo $certificate | openssl x509 -noout -text; done; IFS=$OLDIFS) | egrep 'Certificate:|Subject:|Algorithm'
Certificate:
        Signature Algorithm: sha256WithRSAEncryption
        Subject: CN = lencr.org
            Public Key Algorithm: id-ecPublicKey
    Signature Algorithm: sha256WithRSAEncryption
Certificate:
        Signature Algorithm: sha256WithRSAEncryption
        Subject: C = US, O = Let's Encrypt, CN = R3
            Public Key Algorithm: rsaEncryption
    Signature Algorithm: sha256WithRSAEncryption
Certificate:
        Signature Algorithm: sha256WithRSAEncryption
        Subject: C = US, O = Internet Security Research Group, CN = ISRG Root X1
            Public Key Algorithm: rsaEncryption
    Signature Algorithm: sha256WithRSAEncryption
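As an aside: which key type certbot requests is recorded in the lineage’s renewal config, and can be chosen at issue time with certbot’s `--key-type` flag (`ecdsa` or `rsa`; ECDSA is the default since certbot 2.x). A fragment of what such a renewal file can look like; the path here is hypothetical and the exact keys may vary by certbot version:

```ini
# /opt/local/etc/letsencrypt/renewal/rna.nl.conf (hypothetical path)
[renewalparams]
key_type = ecdsa
```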

If the current default certbot certificate doesn’t work in Duplicati, that’s a big deal, so I dug some more.
Fortunately Linux is looking fine. If life is different on macOS, it might even depend on macOS version.
I’m still wondering, though, whether the site insisting on TLS 1.3 before is what caused the handshake issue.

I did not change anything regarding TLS levels (part of the nginx config); the only thing I did was exchange the cert.

Any idea why the allowed TLS changed then? That image is directly from the SSL Labs analysis.
I’m pretty sure it was TLS 1.3 only before, because I also couldn’t get in with mono on Linux. See:

Update BoringSSL fork #8004

need to update it to track the latest changes including the TLS 1.3 support.

EDIT:

Regardless of how it changed, such a configuration seems very rare; I found no official test server for it. I did, however, find test servers that show my ECC works. Maybe you can try them on macOS, using the csharp test and (if you want a closer look) the openssl s_client pipeline above that decodes the whole chain:

ecc256.badssl.com:443
ecc384.badssl.com:443

Copy-pasting from a web page is funny:

gerben@hermione% csharp -e ‘new System.Net.WebClient ().DownloadString (“https://ecc384.badssl.com:443”)’
gerben@hermione% which csharp
csharp () {
.DownloadString (“https://ecc384.badssl.com:443”)’
}
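For what it’s worth, the mangled quotes can be repaired before running; a sketch (assuming a UTF-8 locale; the pasted line is recreated here just for demonstration):

```shell
# Recreate a pasted command line whose ASCII quotes became curly quotes.
printf 'csharp -e ‘new System.Net.WebClient ().DownloadString ("https://ecc384.badssl.com:443")’\n' > /tmp/pasted
# Substitute each curly quote with its plain ASCII counterpart.
sed "s/’/'/g; s/‘/'/g; s/“/\"/g; s/”/\"/g" /tmp/pasted
```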

After fixing all the quote marks, it turns out I am even unable to reproduce the error above:

csharp -e 'new System.Net.WebClient ().DownloadString ("https://api.backblazeb2.com")'
"<!DOCTYPE html>
<html lang="en">
<head>

I was/am doing all this on an iMac running Monterey 12.6.2, which has just been updated to 12.6.3. But when I log in to another Mac here, still running 12.6.2, it works fine too. Somehow, I cannot even recreate the error above.

What I really do not understand is that all five of my outside Duplicati users (plus one inside, this system, used for testing) were failing when I had installed the ECC wildcard cert, and they started working again, without any other change, when I replaced it with an RSA cert. The nginx config for the MinIO server did not change:

    ssl_certificate     /opt/local/etc/letsencrypt/live/rna.nl/fullchain.pem;
    ssl_certificate_key /opt/local/etc/letsencrypt/live/rna.nl/privkey.pem;
    ssl_protocols	TLSv1.2 TLSv1.3;
    ssl_ciphers		ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers	on;
    ssl_session_cache		shared:SSL:10m;

So I now wonder: if I use an ECC key, don’t I need to add, for instance, ECDHE-ECDSA ciphers next to (or instead of?) the ECDHE-RSA ones? Wasn’t I just looking at a mismatch between the ssl_ciphers option of the nginx that sat in front of my MinIO and the cert? But why then did I originally get the csharp command failure on outside sites like Backblaze? And why not anymore? Caching (grasping at straws here)??
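That cipher/key-type mismatch is easy to reproduce locally, with openssl s_server standing in for nginx (a sketch; the port and /tmp paths are arbitrary). It would also explain why TLS 1.3 clients kept working while TLS 1.2 clients failed:

```shell
# An ECC (ECDSA) certificate behind a cipher list with only ECDHE-RSA suites.
openssl ecparam -name prime256v1 -genkey -noout -out /tmp/ecc.key
openssl req -new -x509 -key /tmp/ecc.key -out /tmp/ecc.crt -days 1 -subj "/CN=localhost"
openssl s_server -accept 4433 -key /tmp/ecc.key -cert /tmp/ecc.crt \
    -cipher ECDHE-RSA-AES256-GCM-SHA384 -quiet &
SRV=$!
sleep 1
# TLS 1.2 should fail: none of the offered ECDHE-RSA suites can use an ECDSA cert.
echo | openssl s_client -connect 127.0.0.1:4433 -tls1_2 >/tmp/t12.out 2>&1
# TLS 1.3 should work: its cipher suites don't encode the certificate key type,
# and nginx's ssl_ciphers (like s_server's -cipher) doesn't apply to them.
echo | openssl s_client -connect 127.0.0.1:4433 -tls1_3 >/tmp/t13.out 2>&1
kill $SRV
grep -o 'handshake failure' /tmp/t12.out | head -1
grep -o 'TLSv1.3' /tmp/t13.out | head -1
```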

SSL Labs, talking from the outside, may have hit web servers other than that nginx, depending on fqdn and port, which may explain the differences in that output. Besides, for part of the setup (currently postfix/dovecot) there is an HAProxy running in front that round-robin load-balances over two different systems, which may also produce different SSL Labs outcomes if the test made use of that.

[image]

and I’d hope an SSL tester knows to go to port 443 (not, e.g., 80) without a port scan or being told to.
Your HAProxy adds another question: is there anything behind it that cares about the SNI used?
If so, I’m not sure what (if anything) the tester would have sent. Maybe the result here is a (working) mystery.
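The SNI effect is at least testable in isolation; a local sketch with openssl s_server, which can hand out a second certificate only when the client’s SNI matches (the hostnames, port, and paths here are made up):

```shell
# Two throwaway certs: a default one, and one served only on SNI "sni.test".
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/a.key \
    -out /tmp/a.crt -days 1 -subj "/CN=default.test" 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/b.key \
    -out /tmp/b.crt -days 1 -subj "/CN=sni.test" 2>/dev/null
openssl s_server -accept 4434 -key /tmp/a.key -cert /tmp/a.crt \
    -servername sni.test -key2 /tmp/b.key -cert2 /tmp/b.crt -quiet &
SRV=$!
sleep 1
# Without SNI the default cert comes back; with matching SNI, the second one.
echo | openssl s_client -connect 127.0.0.1:4434 -noservername 2>/dev/null \
    | grep '^subject' > /tmp/sni_none.out
echo | openssl s_client -connect 127.0.0.1:4434 -servername sni.test 2>/dev/null \
    | grep '^subject' > /tmp/sni_match.out
kill $SRV
cat /tmp/sni_none.out /tmp/sni_match.out
```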

Yes, but the nginx that sits in front of MinIO listens on another port and another outside fqdn. And that nginx only passes on to the backend MinIOs based on an http_authorization string; it won’t answer to anything else. On a different port this nginx listens for the stuff that is www.rna.nl, which is a different server config inside nginx with a different internal port, a different webroot, etc. So at some point we may have had www.rna.nl serving the bespoke www.rna.nl RSA cert while at the same time MinIO sat behind that same nginx (but on a different internal and external port) with the ECC wildcard cert. The SSL nginx options were identical, I think.

Currently, nginx doing MinIO termination (outside fqdnforminio.rna.nl:portforminio):

    ssl_certificate     /opt/local/etc/letsencrypt/live/rna.nl/fullchain.pem;
    ssl_certificate_key /opt/local/etc/letsencrypt/live/rna.nl/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_ciphers         ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers   on;
    ssl_session_cache           shared:SSL:10m;

and the same nginx instance, different ‘server’, handling outside www.rna.nl:443:

    ssl_certificate     /opt/local/etc/letsencrypt/live/rna.nl/fullchain.pem;
    ssl_certificate_key /opt/local/etc/letsencrypt/live/rna.nl/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_ciphers         ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers   on;
    ssl_session_cache           shared:SSL:10m;

These are now identical in this respect, but they weren’t always.
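If the old failure really was an ECC cert behind an RSA-only cipher list, one nginx-side fix (assuming nginx ≥ 1.11.0, which accepts multiple ssl_certificate directives per server) is to serve both key types and list both cipher families; nginx then picks per ClientHello. A sketch; the -rsa/-ecdsa lineage paths are hypothetical:

```nginx
    ssl_certificate     /opt/local/etc/letsencrypt/live/rna.nl-rsa/fullchain.pem;
    ssl_certificate_key /opt/local/etc/letsencrypt/live/rna.nl-rsa/privkey.pem;
    ssl_certificate     /opt/local/etc/letsencrypt/live/rna.nl-ecdsa/fullchain.pem;
    ssl_certificate_key /opt/local/etc/letsencrypt/live/rna.nl-ecdsa/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_ciphers         ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
```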

I don’t use nginx or HAProxy or anything similar, so I can’t really comment on the past situation.

[image]

is when RSA showed up, but then there’s the load-balancer question. The timing is tight too:
the earlier query that saw only TLS 1.3 was timestamped 20 seconds before this RSA cert date.
The test runs for a while, and without experimenting I don’t know whether the timestamp marks the start or the end.

[image]

Having possibly disproved the theory that ECC keys don’t work, I don’t know what the old issue was.
I’ll leave it to you whether you want to pursue it further, but what’s set up now looks pretty good to me.

Agreed. I might retest with ECC later, to test my nginx cipher hypothesis.