Mega.nz stopped working

I have a scheduled backup to Mega.nz, but for the last few days there seems to be a problem. When the job starts running it gets stuck on ‘completing backup’, and if I check my upload speed it never goes above 1 kb/s. My guess is that they probably changed something in their API, but I am not sure. Is anyone else using Mega.nz for backups, and do you have this problem?

I have the same problem.

Jul 21, 2018 11:10 AM: Starting - Async backend wait
Jul 21, 2018 11:10 AM: Uploading a new fileset took 00:00:00.281
Jul 21, 2018 11:10 AM: CommitUpdateRemoteVolume took 00:00:00.015
Jul 21, 2018 11:10 AM: Starting - CommitUpdateRemoteVolume
Jul 21, 2018 11:10 AM: CommitUpdateRemoteVolume took 00:00:00.000
Jul 21, 2018 11:10 AM: Starting - CommitUpdateRemoteVolume
Jul 21, 2018 11:10 AM: Starting - Uploading a new fileset
Jul 21, 2018 11:10 AM: VerifyConsistency took 00:00:00.078
Jul 21, 2018 11:10 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" WHERE "BlocksetID" != ? AND "BlocksetID" != ? AND NOT "BlocksetID" IN (SELECT "BlocksetID" FROM "BlocksetEntry") took 00:00:00.000
Jul 21, 2018 11:10 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" WHERE "BlocksetID" != ? AND "BlocksetID" != ? AND NOT "BlocksetID" IN (SELECT "BlocksetID" FROM "BlocksetEntry")
Jul 21, 2018 11:10 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual") took 00:00:00.015
Jul 21, 2018 11:10 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual")
Jul 21, 2018 11:10 AM: ExecuteScalarInt64: SELECT Count(*) FROM (SELECT DISTINCT "BlocksetID", "Index" FROM "BlocklistHash") took 00:00:00.000
Jul 21, 2018 11:10 AM: Starting - ExecuteScalarInt64: SELECT Count(*) FROM (SELECT DISTINCT "BlocksetID", "Index" FROM "BlocklistHash")
Jul 21, 2018 11:10 AM: ExecuteScalarInt64: SELECT Count(*) FROM "BlocklistHash" took 00:00:00.000
Jul 21, 2018 11:10 AM: Starting - ExecuteScalarInt64: SELECT Count(*) FROM "BlocklistHash"
Jul 21, 2018 11:10 AM: ExecuteReader: SELECT "CalcLen", "Length", "A"."BlocksetID", "File"."Path" FROM ( SELECT "A"."ID" AS "BlocksetID", IFNULL("B"."CalcLen", 0) AS "CalcLen", "A"."Length" FROM "Blockset" A LEFT OUTER JOIN ( SELECT "BlocksetEntry"."BlocksetID", SUM("Block"."Size") AS "CalcLen" FROM "BlocksetEntry" LEFT OUTER JOIN "Block" ON "Block"."ID" = "BlocksetEntry"."BlockID" GROUP BY "BlocksetEntry"."BlocksetID" ) B ON "A"."ID" = "B"."BlocksetID" ) A, "File" WHERE "A"."BlocksetID" = "File"."BlocksetID" AND "A"."CalcLen" != "A"."Length" took 00:00:00.062
Jul 21, 2018 11:10 AM: Starting - ExecuteReader: SELECT "CalcLen", "Length", "A"."BlocksetID", "File"."Path" FROM ( SELECT "A"."ID" AS "BlocksetID", IFNULL("B"."CalcLen", 0) AS "CalcLen", "A"."Length" FROM "Blockset" A LEFT OUTER JOIN ( SELECT "BlocksetEntry"."BlocksetID", SUM("Block"."Size") AS "CalcLen" FROM "BlocksetEntry" LEFT OUTER JOIN "Block" ON "Block"."ID" = "BlocksetEntry"."BlockID" GROUP BY "BlocksetEntry"."BlocksetID" ) B ON "A"."ID" = "B"."BlocksetID" ) A, "File" WHERE "A"."BlocksetID" = "File"."BlocksetID" AND "A"."CalcLen" != "A"."Length"
Jul 21, 2018 11:10 AM: Starting - VerifyConsistency
Jul 21, 2018 11:10 AM: UpdateChangeStatistics took 00:00:00.078
Jul 21, 2018 11:10 AM: ExecuteNonQuery: DROP TABLE IF EXISTS "TmpFileList-AD6444F9E601E1429293630409F0B697"; took 00:00:00.000
Jul 21, 2018 11:10 AM: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS "TmpFileList-AD6444F9E601E1429293630409F0B697";
Jul 21, 2018 11:10 AM: ExecuteNonQuery: DROP TABLE IF EXISTS "TmpFileList-7374017816D3294998A1557BA7E41C70"; took 00:00:00.000
Jul 21, 2018 11:10 AM: Starting - ExecuteNonQuery: DROP TABLE IF EXISTS "TmpFileList-7374017816D3294998A1557BA7E41C70";
Jul 21, 2018 11:10 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM "TmpFileList-7374017816D3294998A1557BA7E41C70" A, "TmpFileList-AD6444F9E601E1429293630409F0B697" B WHERE "A"."Path" = "B"."Path" AND ("A"."Filehash" != "B"."Filehash" OR "A"."Metahash" != "B"."Metahash") took 00:00:00.015
Jul 21, 2018 11:10 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "TmpFileList-7374017816D3294998A1557BA7E41C70" A, "TmpFileList-AD6444F9E601E1429293630409F0B697" B WHERE "A"."Path" = "B"."Path" AND ("A"."Filehash" != "B"."Filehash" OR "A"."Metahash" != "B"."Metahash")
Jul 21, 2018 11:10 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM "TmpFileList-7374017816D3294998A1557BA7E41C70" WHERE "TmpFileList-7374017816D3294998A1557BA7E41C70"."Path" NOT IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ?) took 00:00:00.015
Jul 21, 2018 11:10 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "TmpFileList-7374017816D3294998A1557BA7E41C70" WHERE "TmpFileList-7374017816D3294998A1557BA7E41C70"."Path" NOT IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ?)
Jul 21, 2018 11:10 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ? AND "File"."BlocksetID" != ? AND "File"."BlocksetID" != ? AND NOT "File"."Path" IN (SELECT "Path" FROM "TmpFileList-7374017816D3294998A1557BA7E41C70") took 00:00:00.015
Jul 21, 2018 11:10 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ? AND "File"."BlocksetID" != ? AND "File"."BlocksetID" != ? AND NOT "File"."Path" IN (SELECT "Path" FROM "TmpFileList-7374017816D3294998A1557BA7E41C70")
Jul 21, 2018 11:10 AM: ExecuteNonQuery: CREATE TEMPORARY TABLE "TmpFileList-AD6444F9E601E1429293630409F0B697" AS SELECT "File"."Path", "A"."Fullhash" AS "Filehash", "B"."Fullhash" AS "Metahash" FROM "File", "FilesetEntry", "Blockset" A, "Blockset" B, "Metadataset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "A"."ID" = "File"."BlocksetID" AND "FilesetEntry"."FilesetID" = ? AND "File"."MetadataID" = "Metadataset"."ID" AND "Metadataset"."BlocksetID" = "B"."ID" took 00:00:00.000
Jul 21, 2018 11:10 AM: Starting - ExecuteNonQuery: CREATE TEMPORARY TABLE "TmpFileList-AD6444F9E601E1429293630409F0B697" AS SELECT "File"."Path", "A"."Fullhash" AS "Filehash", "B"."Fullhash" AS "Metahash" FROM "File", "FilesetEntry", "Blockset" A, "Blockset" B, "Metadataset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "A"."ID" = "File"."BlocksetID" AND "FilesetEntry"."FilesetID" = ? AND "File"."MetadataID" = "Metadataset"."ID" AND "Metadataset"."BlocksetID" = "B"."ID"
Jul 21, 2018 11:10 AM: ExecuteNonQuery: CREATE TEMPORARY TABLE "TmpFileList-7374017816D3294998A1557BA7E41C70" AS SELECT "File"."Path", "A"."Fullhash" AS "Filehash", "B"."Fullhash" AS "Metahash" FROM "File", "FilesetEntry", "Blockset" A, "Blockset" B, "Metadataset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "A"."ID" = "File"."BlocksetID" AND "FilesetEntry"."FilesetID" = ? AND "File"."MetadataID" = "Metadataset"."ID" AND "Metadataset"."BlocksetID" = "B"."ID" took 00:00:00.015
Jul 21, 2018 11:10 AM: Starting - ExecuteNonQuery: CREATE TEMPORARY TABLE "TmpFileList-7374017816D3294998A1557BA7E41C70" AS SELECT "File"."Path", "A"."Fullhash" AS "Filehash", "B"."Fullhash" AS "Metahash" FROM "File", "FilesetEntry", "Blockset" A, "Blockset" B, "Metadataset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "A"."ID" = "File"."BlocksetID" AND "FilesetEntry"."FilesetID" = ? AND "File"."MetadataID" = "Metadataset"."ID" AND "Metadataset"."BlocksetID" = "B"."ID"
Jul 21, 2018 11:10 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = ? AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = ? ) A, (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = ? AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = ? ) B WHERE "A"."Path" = "B"."Path" AND "A"."Fullhash" != "B"."Fullhash" took 00:00:00.000
Jul 21, 2018 11:10 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = ? AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = ? ) A, (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = ? AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = ? ) B WHERE "A"."Path" = "B"."Path" AND "A"."Fullhash" != "B"."Fullhash"
Jul 21, 2018 11:10 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = ? AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = ? ) A, (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = ? AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = ? ) B WHERE "A"."Path" = "B"."Path" AND "A"."Fullhash" != "B"."Fullhash" took 00:00:00.000
Jul 21, 2018 11:10 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = ? AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = ? ) A, (SELECT "File"."Path", "Blockset"."Fullhash" FROM "File", "FilesetEntry", "Metadataset", "Blockset" WHERE "File"."ID" = "FilesetEntry"."FileID" AND "Metadataset"."ID" = "File"."MetadataID" AND "File"."BlocksetID" = ? AND "Metadataset"."BlocksetID" = "Blockset"."ID" AND "FilesetEntry"."FilesetID" = ? ) B WHERE "A"."Path" = "B"."Path" AND "A"."Fullhash" != "B"."Fullhash"
Jul 21, 2018 11:10 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ? AND "File"."BlocksetID" = ? AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ?) took 00:00:00.000
Jul 21, 2018 11:10 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ? AND "File"."BlocksetID" = ? AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ?)
Jul 21, 2018 11:10 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ? AND "File"."BlocksetID" = ? AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ?) took 00:00:00.015
Jul 21, 2018 11:10 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ? AND "File"."BlocksetID" = ? AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ?)
Jul 21, 2018 11:10 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ? AND "File"."BlocksetID" = ? AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ?) took 00:00:00.000
Jul 21, 2018 11:10 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ? AND "File"."BlocksetID" = ? AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ?)
Jul 21, 2018 11:10 AM: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ? AND "File"."BlocksetID" = ? AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ?) took 00:00:00.000
Jul 21, 2018 11:10 AM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ? AND "File"."BlocksetID" = ? AND NOT "File"."Path" IN (SELECT "Path" FROM "File" INNER JOIN "FilesetEntry" ON "File"."ID" = "FilesetEntry"."FileID" WHERE "FilesetEntry"."FilesetID" = ?)
Jul 21, 2018 11:10 AM: ExecuteScalarInt64: SELECT "ID" FROM "Fileset" WHERE "Timestamp" < ? AND "ID" != ? ORDER BY "Timestamp" DESC took 00:00:00.000
Jul 21, 2018 11:10 AM: Starting - ExecuteScalarInt64: SELECT "ID" FROM "Fileset" WHERE "Timestamp" < ? AND "ID" != ? ORDER BY "Timestamp" DESC
Jul 21, 2018 11:10 AM: Starting - UpdateChangeStatistics
Jul 21, 2018 11:10 AM: FinalizeRemoteVolumes took 00:00:00.109
Jul 21, 2018 11:10 AM: CommitUpdateRemoteVolume took 00:00:00.015
Jul 21, 2018 11:10 AM: Backend event: Put - Started: duplicati-b38415587b47143b1a72bff386de3e403.dblock.zip.aes (1,13 MB)

Hi @djbill, welcome to the forum!

Can you confirm you’re getting the same “stuck on ‘completing backup’” situation that @Ivan reported?

It looks like @warwickmm, @kenkendk, and @Pectojin have all updated the MegaBackend.cs code in the last 6 months - but if you haven’t done any updates, it seems you may be correct that Mega has made a change at their end.

Just out of curiosity, does the “test destination” feature still work? Also, if you create a small test backup (just a few files) to mega.nz, does that work?
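
If you want to exercise the Mega backend outside of a full backup run, something along these lines might also be worth a try. This is only a rough sketch: the folder, username, and password in the mega:// URL are placeholders, and the exact BackendTool syntax should be double-checked against its built-in help for your Duplicati version.

    # List the remote folder directly through Duplicati's built-in mega backend
    # (placeholder credentials; use the values from your own job).
    Duplicati.CommandLine.BackendTool.exe LIST "mega://Backups/Test?auth-username=user@example.com&auth-password=SECRET"
    # Upload a small throwaway file to confirm that writes still work.
    Duplicati.CommandLine.BackendTool.exe PUT "mega://Backups/Test?auth-username=user@example.com&auth-password=SECRET" testfile.bin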

Hi, my current version of Duplicati is “2.0.3.3_beta_2018-04-02”; when I posted I was on an older version, “2.0.2.1_beta_2017-08-01”.

Now, with the latest version, I have the same problem: the job gets stuck on “Waiting for upload…”

I created a new backup job with 3 files of about 15 megabytes. That job finishes successfully.

"Test destination” button works fine on all of my jobs with mega.nz backend.

Can I send you some kind of log for analysis?

Yesterday I ran a new backup job and transferred 9.80 GB of data (2012 files) to mega.nz; after that, the job got stuck on “Waiting for upload…”. Maybe I reached the quota for a free account. I changed the default chunk (upload volume) size to 10 MB when I created the new backup job.

Maybe the trouble with Mega is connected with this:

MEGA surreptitiously sliced up the 50GB program in 2017. You still get 50GB for signing up, but only 15 of those gigabytes are yours forever. The other 35GB expires after 30 days.

I’m seeing something similar with mega.nz using Duplicati’s built-in mega storage connector.

My backup was working fine until I started encountering errors, IIRC around 15/07/18. I cannot remember which version I had been using at that point, but I had been on it for some months. I upgraded to 2.0.3.3_beta_2018-04-02 but still see the issue.
I have a free mega account but have yet to get anywhere near 15GB. What I have observed:

  • Creating a new, small test backup to Mega works with no errors. However, the next incremental backup fails (stuck at “Waiting for upload” IIRC, and no new files are created in the Mega storage).
  • Using the rclone (1.42) backend works fine (a sketch of my rclone setup follows below).
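
For reference, the rclone side of that test was set up roughly like this. It is a sketch from memory, assuming rclone 1.42 and a remote I named “mymega”; the interactive prompts may differ a little between versions.

    # Create an rclone remote for Mega: pick the "mega" storage type and enter
    # the account e-mail address and password when prompted.
    rclone config
    # Confirm the remote works by listing the top-level folders of the account.
    rclone lsd mymega:

Duplicati is then pointed at that remote by selecting the “rclone” storage type for the job and filling in the remote name and a path on it (in my case something like mymega: plus a dedicated backup folder).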

I don’t suppose setting --upload-verification-file=false helps at all like it did over here?
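
In case it helps, this is roughly what that looks like when the job is run from the command line. The URL, source path, and passphrase below are placeholders rather than anything from your job (“Export as Command-line” on the existing job gives the real ones), and the same option can also be added under the job’s advanced options in the web UI.

    # Placeholder sketch of a Duplicati CLI backup with the suggested option appended.
    Duplicati.CommandLine.exe backup "mega://Backups/PC?auth-username=user@example.com&auth-password=SECRET" "C:\Data" --passphrase=SECRET --upload-verification-file=false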

Yesterday and today my backups to mega.nz worked fine. The only thing I changed was the “Upload volume size”, from 50MB (default) to 10MB.
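
For anyone who wants to try the same change from the command line: the “Upload volume size” field corresponds to the --dblock-size advanced option, so a sketch of the equivalent setting (again with a placeholder URL and source path) would be:

    # Reduce the remote volume (dblock) size from the 50MB default to 10MB.
    # Only newly uploaded volumes use the new size; existing volumes are left as they are.
    Duplicati.CommandLine.exe backup "mega://Backups/PC?auth-username=user@example.com&auth-password=SECRET" "C:\Data" --dblock-size=10MB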

I had a look at my history. The first failed job (which ran endlessly) was also on 15/07/2018! I had the problem for 5 days until I changed the upload-verification setting for all my Mega jobs on 20/07/2018, and since then no errors. There have been about 40 successful backup runs on my various Mega accounts in these 8 days.

I have some very old accounts with 50GB (permanent) and some newer accounts with a permanent 15GB. All accounts have at least 2-5 GB of free space.

I have been using the standalone rclone backend storage connector and don’t encounter errors now.

How does the rclone method differ from the built-in mega connector in terms of the files created at Mega?
Does the rclone method attempt to upload verification files?

With the common date above, and as John suggested in the other thread, it seems to point to a change at Mega’s end. One that perhaps affects the built-in client but not the standalone rclone?

It seems that with the 10 MB volume size it is working again…

It sounds like a few different things appear to have helped resolve this issue, including:

  • changing dblock (Upload volume) size from 50MB to 10MB (this may indicate Mega.nz is having issues processing “large” files)
  • setting --upload-verification-file=false (this may indicate Mega.nz doesn’t handle replacing existing files well, similar to a known issue with box.com)
  • switching from “mega.nz” storage type to “rclone” (this may indicate a bug or performance/load issue with the Mega.nz API)

All of these imply the issue is at their end, but we don’t really have any way to “prove” that.