Duplicati stuck on canceling job info box (QNAP + Duplicati + Mega.NZ)

Hi guys, I wonder what to do when Duplicati is stuck on that green info box showing a job's progress. My backup to the Mega cloud was successful, but the green info box was stuck on “Checking files” for over 12 hours, and it was a small backup. So I decided to cancel, and now it is stuck on “Canceling the job”.

Starting other jobs seems to be possible: I test-started another backup job and I can find the new job in the restore options. But the green info box still shows the other job.

I also think this is a problem for the schedule. The first job (the one with the stuck info box) is shown as “never backed up” because it never really finished the “checking files” state. Because of that, the next backup would be today, which is nonsense. The schedule is every 30 days.

Is there a clear option or a forced cancel other than the “X” on that info box? Also, is there an option to skip checking the files after a backup?

Edit: Even stopping the Duplicati app on QNAP, or killing the process, did not stop the “Verifying data” message. The info box switches from “Verifying data” to “Stop job”, and both the “X” button and Stop simply do nothing. The second backup I tried to start as a test also did nothing.

Thank you

Hello

This is a very well-known problem. Duplicati does not stop when it is stuck in the last phase of emptying its buffers to the backend; when something bad happens in this step, it cannot be stopped by clicking ‘Cancel’. Most often a network exchange has stalled, the root cause being that by default Duplicati assumes perfectly reliable hardware and network by leaving the timeout set to infinite. When a timeout is set, Duplicati will eventually abort, except that by default it retries 5 times, and some users bump this retry count to enormous numbers, with similar results.
So take a look at the live log to see whether Duplicati is really doing something. If you see regular activity, you are in the second case; if you see nothing even after, say, half an hour, you are in the first case.

Remedies: when Duplicati is stuck like that, there is not much hope of doing anything other than killing the process. Before doing so, disable the other jobs, as they may start immediately when Duplicati restarts, and you may want to set the number-of-retries and http-operation-timeout advanced options before that happens.
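As a purely illustrative sketch (the values below are assumptions to adapt to your own connection, not recommendations), those two options can be added to the job under Advanced options, for example via its “Edit as text” view:

    --number-of-retries=5
    --http-operation-timeout=5m

With a finite timeout set, a stalled transfer eventually fails, gets retried, and, if it keeps failing, lets the job abort instead of hanging forever.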

Hi, thanks for the quick reply. That doesn't sound so good; in that case Duplicati may not be the tool I want to use. I need something whose basic functions I can at least control reliably with a GUI. Unfortunately, Duplicati is the only tool that offers the Mega cloud. Or do you happen to know another?

Thanks again

If the network hangs, the software waits for it to time out, and this works reliably provided that a timeout is set in the configuration. The right value is not obvious to set by default, since it depends on the network speed and the maximum block size, so a default could be either too long or too short; that's why none is set by default. If you set a timeout, you will be able to stop Duplicati even if the network hangs: not immediately, but after the timeout delay.

Did you rule out everything in the MEGA Installs and apps offerings? It looks like their backup is more like sync, meaning not as capable as a true versioned backup. If you do a web search for “third-party” at mega.io, or around the web in general, you'll find that they're hostile to third-party tools. That limits your options…

Because no API or support is provided to assist third-party tools, reliability suffers.

The Mega page for rclone (which is a Duplicati storage provider option, if you're daring enough) says this:

There doesn’t appear to be any documentation for the mega protocol beyond the mega C++ SDK source code so there are likely quite a few errors still remaining in this library.

Duplicati uses MegaApiClient, which is probably subject to the same limitations. BTW, they seem to use Timeout.Infinite as their default timeout, but it looks like a timeout can be set when creating the client.

That is possibly an escalation in their fight against third-party tools, or possibly it's just a meaningless reply.

Hmm, OK, thank you. Where do I set this timeout in the backup config? The step where I put in my Mega credentials? http-readwrite-timeout & http-operation-timeout, and the input value is in minutes?

Yes, you can set these values in step 2, the backend parameters. Use a value appropriate for your network speed and block size. If you keep the default value of 50 MB for the block size, and your upload is, say, 10 Mbit/s (a rather limited cable speed in many countries), that means about 1 MByte/s, so less than one minute to transmit in normal conditions; a timeout of 2 minutes would be good enough.
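To spell out the arithmetic behind that suggestion (using only the example numbers above, which are assumptions about your connection):

    10 Mbit/s upload ≈ 1.25 MByte/s raw, call it ~1 MByte/s after overhead
    50 MB remote volume ÷ 1 MByte/s ≈ 50 s per upload in normal conditions
    → --http-operation-timeout=2m leaves comfortable headroom

If your upload speed or volume size is different, scale the timeout accordingly.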

Aaah, I see. I already tried some test backups with timeouts and it works better now, so thank you a lot so far. What is it about the block size? That's the volume size at step 5, right? As I understand it, this will split the backup into files of 50 MB. Why not make it 1 GB? Fewer files on the target volume, but less stable? What is the reason to leave it at the default or to customize it?

correct

Less deduplication, more time for each block, so more impact in case of network instability, and it could have adverse performance effects when compacting, because Duplicati needs to download a whole block to get partial data out of it, so possibly more data is downloaded than with the default size.

I don't see an obvious win in changing this default (an exception is the very specific case of an FTP server with a fixed limit of 2000 files that can be listed at a time; IMO, in that case it's better to avoid the provider altogether if there is no way to change such a limit).
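As a rough illustration of the compacting and restore cost mentioned above (the amounts are hypothetical):

    need 10 MB of data stored in a 50 MB remote volume → download 50 MB
    need the same 10 MB stored in a 1 GB remote volume  → download 1 GB

The larger the remote volume, the more data a partial read has to pull down.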


Thank you so much for all this information.

Thank you, rclone with Kopia was also on my mind. After the clarification regarding timeouts and block size, I will continue trying Duplicati.

Duplicati's two confusingly similar terms are getting mixed up in this discussion. Remote volume size links to Choosing sizes in Duplicati, which would be worth reading. Maybe the GUI is trying to separate the terms?

Remote volume size is dblock-size, which is how big a batch of blocks gets packed into a dblock file for storage. Going too large is bad for network transfers and for restores (a whole dblock is downloaded if you need any block in it).

blocksize controls how finely the source files are divided for purposes of deduplication. Too small creates an overload of blocks, so performance suffers; too large weakens deduplication. They're two separate settings…
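As a sketch, the two appear as separate advanced options; the values shown are only the commonly cited defaults, which may differ between Duplicati versions:

    --dblock-size=50MB   (Remote volume size: how big each uploaded dblock file is)
    --blocksize=100KB    (how finely source files are split for deduplication)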


Yes, thanks for the correction, I was distracted.