Ok, let’s dig into a bit more detail, because it’s summer and I’m not too busy.
During backup, I’ve configured Duplicati to use a single connection:
2023-07-14T20:41:18.162Z << [FTP Session # A.B.C.D] 220-FileZilla Server 1.6.7
The client logs in as normal and enters the command:
And after that, the session is never used again. The single-connection restriction isn’t honored, because the rest of the transfers create new connections. Wouldn’t it then make sense to close this connection immediately after the MLSD results arrive? Now it’s just left lingering.
The rest of the STOR commands for the iblock / dblock files use a single session, including the dindex; then it’s closed. Verification then seems to use a new connection, which is fine.
I’m just wondering why the first MLSD (ls) connection is left open if it’s never reused. It would make sense to close it or reuse it.
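The fix I’m suggesting can be sketched in a few lines. This is a hypothetical illustration, not Duplicati’s actual code or API: the `conn` object only needs `mlsd()` and `quit()` (as in Python’s own `ftplib`), and `list_and_close` is a made-up name.

```python
def list_and_close(conn, path="."):
    """Fetch the MLSD listing, then release the connection right away
    instead of leaving it idle for the rest of the backup."""
    try:
        # Materialize the listing before closing; mlsd() yields lazily,
        # so the generator must be consumed while the socket is open.
        entries = list(conn.mlsd(path))
    finally:
        try:
            conn.quit()   # polite close: sends QUIT on the control channel
        except Exception:
            conn.close()  # fall back to just dropping the socket
    return entries
```

The point is simply that the listing connection’s lifetime ends with the listing, so nothing lingers until the process exits or the firewall’s idle timeout kills it.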
With my configuration, where I’ve got many very long-running backup sets, it was really easy to spot the lingering unused connections open to the server. Due to firewall configuration I’ve got a limited number of ports reserved for FTPS, yet this isn’t a real problem, because I’ve got a one-hour timeout configured for idle connections. But now that I’ve realized the connection is really useless, I could shorten the timeout to 15 minutes. That should still be long enough not to terminate the useful upload connections, where I prefer a kept-alive connection over a full reconnect.
As for the details: sure, it’s the AFTP module. The FTPS module itself is way too broken, with bad TLS handling.
When using FTPS - Ref: 2023-07-19T10:30:31.400Z [FTP Session # A.B.C.D test] GnuTLS error -110 in gnutls_record_recv: The TLS connection was non-properly terminated.
Yet this doesn’t actually terminate the connection; it just produces an error. I’ve had this problem with Windows FTPS earlier, and even fixed it (i.e., the broken exception handling) for Python FTPS to get it working. Ref: Sami Lehtinen - Python 3.2 MS FTPS/SSL/TLS lockup & fix
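The kind of workaround I mean looks roughly like this. It’s a minimal sketch, not my original fix: the class name is made up, and I’m assuming the abrupt shutdown surfaces as `ssl.SSLEOFError` from `ftplib.FTP_TLS`’s transfer methods, which is the typical symptom when a server skips the TLS close_notify.

```python
import ftplib
import ssl

class TolerantFTPS(ftplib.FTP_TLS):
    """FTP_TLS variant that tolerates servers which drop the data
    connection without a proper TLS close_notify (the case GnuTLS
    reports as error -110, "non-properly terminated")."""

    def retrbinary(self, cmd, callback, blocksize=8192, rest=None):
        try:
            return super().retrbinary(cmd, callback, blocksize, rest)
        except ssl.SSLEOFError:
            # The payload already arrived in full; only the TLS shutdown
            # was rude. Treat it as a completed transfer instead of
            # failing the whole operation.
            return "226 Transfer complete (abrupt TLS shutdown tolerated)."
```

The design choice is the same one I argued for back then: the error happens *after* the data has been transferred, so swallowing just this specific exception is safe, while letting every other failure propagate normally.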
But to be honest, this really doesn’t matter; it’s an absolutely meaningless issue compared to the data corruption issues. It’s technically a cosmetic error that two connections are used when one would be sufficient. I admit it: tidy people are annoying, the messy world works too.
Sure, the lingering connection is actually closed when Duplicati finally exits; I confirmed this when I tested with smaller sets. Most of my backups run for several hours, and because I have only a 60-minute idle timeout, I saw lots of timeout errors in the logs, which I assumed were caused by leftover sockets. But it seems that assumption wasn’t completely accurate. Now that I’ve tested with a minimal setup, I found out that it’s just an unnecessary open connection *while* the backup runs.
Yet I’m not completely sure that conclusion is true either, because I’m quite sure I’ve seen those connections lingering on the server even when the backup dlist is present after a clean restart and the verify shouldn’t take that long. So there might be some situation that causes the session to be left over even after the backup has completed. From the session IDs I can see that it’s the first session I’m talking about.