Bug report for Duplicati -

Hi All,

After using mega.nz without any issues for a long time, Duplicati now freezes even when testing the connection. As I have seen in older threads, many of you had this issue earlier, so unfortunately this is not new, but…

Whatever timeout value I set for the mega.nz backup (the shortest I can set is 1 minute, aside from zero, for http-operation-timeout and http-readwrite-timeout), Duplicati doesn't abort that backup, and the following backups won't start.


Welcome to the forum @jf64

Did you find any relevant information or tests in those older threads? If tests, did you try any? Feel free to link to interesting forum topics.

The http-operation-timeout and http-readwrite-timeout options might not be relevant, as they appear to apply to protocols that Duplicati runs directly. Mega is handled by
MegaApiClient, which has its own timeout design. I found one issue that might be of relevance to this:

Significant number of Operation has timed out #164, and I'll ask the same question there about account type.
Free accounts get worse treatment. Is the issue always present, or did it just begin without any changes on your side?

It might be helpful to get some clues on how far things get in the low-level protocol. What OS are you on?

Basically, look at TCP information that seems Mega-relevant, using netstat and filtering by process ID.
If on Windows, a more convenient tool is Sysinternals Process Explorer, to watch the child Duplicati process's TCP connections.

Another test would be to see if Duplicati.CommandLine.BackendTool.exe can do a list or get, and whether anything meaningful comes out at a Command Prompt. Use Export As Command-line to get a URL to use.
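As a side note: if the BackendTool call itself hangs, a small wrapper can at least bound the wait instead of blocking the prompt forever. A minimal sketch in Python, assuming nothing about Duplicati itself; the executable path and mega:// URL in the comment are placeholders, not verified values:

```python
import subprocess

def run_with_timeout(cmd, timeout_s):
    """Run a command line; return (exit_code, stdout), or (None, None)
    if it does not finish within timeout_s seconds (i.e. it appears to hang)."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
        return proc.returncode, proc.stdout
    except subprocess.TimeoutExpired:
        return None, None

# Hypothetical usage (placeholder path and URL, not verified values):
# code, out = run_with_timeout(
#     [r"C:\Program Files\Duplicati 2\Duplicati.CommandLine.BackendTool.exe",
#      "list", "mega://..."],
#     timeout_s=120)
# if code is None:
#     print("BackendTool hung for over 2 minutes")
```

This makes it easy to compare the "first call works, later calls hang" behavior reported below, since a hung call comes back as `(None, None)` rather than blocking.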

Thank you for the information, @ts678.

  • I'm using Duplicati on a Win10 Prof computer.
  • The Mega account I'm dealing with is a free account, and using Mega's client I can access my data there.
    ** I asked their support but received no answer, so it might be intentional…
    ** When I test the connection, the first test passes; from the second one on it won't finish.
  • I just tested the LIST command of Duplicati.CommandLine.BackendTool.exe for Mega. It behaves like the connection test: the first call returns the file list correctly, and from the 2nd call on it just hangs. I don't know how long I have to wait before a command behaves like a "first" call (i.e., gets results from Mega) again.

Historically, the answer is often that they'd rather you use their client (IIRC – did you find such reports?).
Are you saying that the Mega client keeps on working, both the first time and later (even after Duplicati is hung)?

If you're on (or are willing to be on) GitHub, you could ask whether the MegaApiClient author has seen such things.
There wasn't a flood of complaints there, but this sounds at least a little similar to the one issue I saw.

Process Explorer (easier, after install), or netstat -ao | findstr mega, or maybe findstr <PID> where the Duplicati PID comes from Task Manager (the child process is the one to watch, or you could just test with both), might be interesting. It sounds like you probably got an ESTABLISHED connection that didn't proceed as usual.

I'm pretty sure this is all encrypted, so it's hard to see what's being said without some special techniques such as network tracing, which I think can capture the program's view of the data without the encryption…


Mega remote couldn't login error is rclone's version of a Mega file-listing (lsf) failure, with some comments.
Mega: randomly failed to login is a slightly older issue with more commentary. Not all topics are negative.

If you dare try your hand with rclone, and can get it to work, then Duplicati Rclone destination might work.

What I meant is: Mega's Windows client keeps working even when Duplicati's connection doesn't.

The output of netstat -ao | findstr <PID> (in German; the PID is 15104):

  Proto  Lokale Adresse         Remoteadresse          Status           PID
  TCP         xxx:0      ABHÖREN         15104
  TCP         xxx:49164  SCHLIESSEN_WARTEN    15104
  TCP         xxx:54097  HERGESTELLT     15104
  TCP         xxx:64069  HERGESTELLT     15104
  TCP         xxx:64329  HERGESTELLT     15104
  TCP        xxx:8200   HERGESTELLT     15104
  TCP   bt1:https              HERGESTELLT     15104
  TCP   bt1:https              HERGESTELLT     15104

Only one of the Duplicati processes generates output.
There is no “mega” (as text) in the output.

HERGESTELLT translates literally to PRODUCED, but judging from the port 8200 connections it usually means ESTABLISHED.
The only other ports involved are https, at a host called bt1. I don't know if that's a simple host name. Ah, netstat /? says I can use -f for the full name. Linux netstat doesn't need it, nor does Process Explorer…

Google shows some mentions of bt1.api.mega.co.nz. Maybe that's the full name you connected to…

The original Duplicati process just looks for any installed updates and starts the latest, so it’s pretty quiet.

You can also translate HERGESTELLT as ESTABLISHED.

Correct, -f extends the name to bt[n].api.mega.co.nz. By the way, "mega" should not be used after findstr here, because it would also show the connections of Mega's own client.
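Filtering by PID instead of by the text "mega" can also be scripted. A small Python sketch that keeps only the netstat lines whose last column is a given PID (the sample in the comment mimics the German netstat output above; this is just an illustration, not a Duplicati tool):

```python
def lines_for_pid(netstat_output, pid):
    """Keep only netstat lines whose last whitespace-separated column
    equals the given PID, mirroring `netstat -ao | findstr <PID>`."""
    wanted = str(pid)
    return [line for line in netstat_output.splitlines()
            if line.split() and line.split()[-1] == wanted]

# Example input line, shaped like the German netstat output above:
# TCP    192.0.2.1:54097    bt1:https    HERGESTELLT    15104
```

Unlike matching on the string "mega", this cannot accidentally pick up connections belonging to Mega's own client, since only the PID column is compared.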

And the point is: even if the people at Mega let those Duplicati processes connect, they then let the connections die. But those Duplicati processes just keep waiting and never recognize that they have been left stranded in the middle of nowhere…

Another experiment:
Even killing Duplicati's two processes and restarting Duplicati doesn't help; testing the connection still just hangs.

I made an experiment that shows pretty well what happens…

I added an interval of 60 to the above netstat call, which repeats it every 60 seconds (before that, I started the connection check to Mega in Duplicati):

  • The local port to bt[n].api.mega.co.nz changes at irregular intervals. Is this how Mega "keeps" the connection alive?
  • After 10 minutes the connection to Mega disappears from the output of netstat. Please remember that this is only a snapshot every 60 seconds…
    ** Within those 10 minutes (e.g. after 7 minutes) there were some snapshots without a connection to bt[n].api.mega.co.nz.
  • After another 5 minutes (i.e. 15 minutes in total) Duplicati says the connection test was OK.

I think it should be clarified whether and why the scheduler hangs when a backup job (e.g. the one to Mega) is hanging.

I don't know whether the current implementation allows parallel backup sessions or not.

  1. If parallel backup sessions are allowed, then, independently of the status of the running session, the next backup that is due should be executed. It would be useful to have a "max. number of allowed parallel backup sessions" setting. Once the number of running sessions reaches that limit, no new backup job would be started until one of them finishes.
  2. If parallel backup sessions are not allowed, then it is fine that the next backup job won't start until the running one has finished… Although "not finished yet" should be hardened a bit, to "not finished yet and there is active data transfer". So a switch to automatically kill "running" jobs without data transfer after some amount of time would probably be useful.
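The two proposals above can be sketched together. This is not Duplicati's actual scheduler, just an illustrative Python model of a session cap (point 1) plus a stall timeout that frees the queue when a job stops transferring data (point 2):

```python
import time

class Job:
    def __init__(self, name):
        self.name = name
        self.last_progress = time.monotonic()  # time of the last data transfer

    def note_progress(self):
        """Called whenever the job actually transfers data."""
        self.last_progress = time.monotonic()

class Scheduler:
    def __init__(self, max_parallel=1, stall_timeout_s=900):
        self.max_parallel = max_parallel        # point 1: cap on parallel sessions
        self.stall_timeout_s = stall_timeout_s  # point 2: kill jobs with no transfer
        self.active = []
        self.queue = []

    def submit(self, job):
        self.queue.append(job)

    def tick(self):
        """Periodic check: drop stalled jobs, then start queued ones up to the cap."""
        now = time.monotonic()
        for job in list(self.active):
            if now - job.last_progress > self.stall_timeout_s:
                self.active.remove(job)  # treated as killed: no transfer for too long
        while self.queue and len(self.active) < self.max_parallel:
            self.active.append(self.queue.pop(0))
```

With max_parallel=1 this matches the "no parallel sessions" case, but a hung Mega job no longer blocks the following backups forever: once its stall timeout expires, the next queued job starts.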