A task was cancelled and other errors

Was this resolved? As @ts678 mentions, if nothing at all has changed, the backup will not create a new version, and as a side effect it will also drop the updated timestamps instead of recording them. Making a tiny change will make Duplicati persist the new values.

There is a related issue here, so any input on how to fix it would be appreciated.

@kenkendk Well yes, I did a tiny change and that fixed it.

What was the change? :smiley:

After read-write-timeout=5min worked on a small backup, I tried something larger: 30,000+ files of widely varying sizes across 7 directory trees. It is now failing again. I have increased the timeout value to 2 hours, and it still fails, so I am stuck with my main NAS backup non-functional. Again, the error is very generic, so I don’t have much more information. Any ideas?

Additional info: Duplicati is 2.1.0.3-beta-2025-01-22, running on an EndeavourOS (Arch) desktop. Source directories are NFS mounts and the destination is a WebDAV server. The only message is “A task was cancelled”.

@kenkendk Yeah, as @ts678 said: ANY change will do. I just added a small text file that I removed again after the backup was finished.
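
For anyone else hitting this, here is a minimal sketch of that workaround from a shell. The duplicati-cli entry point, the WebDAV URL, and the paths are example placeholders, not a confirmed recipe:

```sh
# Create a small marker file so Duplicati sees a change and
# writes a new version (persisting the updated timestamps).
touch /data/force-new-version.txt

# Run the backup as usual; the URL and source path are examples.
duplicati-cli backup "webdav://nas.example.com/backups" /data

# Remove the marker file again once the backup has finished.
rm /data/force-new-version.txt
```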

@Ned1 I have a very similar configuration (NAS with Duplicati on Docker, destination WebDAV) and I had very similar error messages after upgrading to 2.1.0.3-beta-2025-01-22 (Task cancelled…). At the moment everything works fine again - however, I cannot say exactly what solved it…

In short, here are the two things I did:

  1. Upgraded to version 2.1.0.4_stable_2025-01-31
  2. Deleted the local databases and repaired (resynced) them

To be a bit more precise: I downgraded to 2.0.8.1_beta_2024-05-07 and deleted all local databases; then I repaired (resynced) each backup job and ran 1 or 2 backups to make sure everything worked as expected; then I upgraded to 2.1.0.4_stable_2025-01-31 and everything was fine.
Again, I cannot say what exactly did the trick, but my large jobs (>2TB with 100k+ files, running for 2 years already) are working just fine now.
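
For reference, a rough sketch of the delete-and-repair step via the CLI. The database filename is per-job and randomly generated (check the job’s Database page in the UI for the real path), and the URL is an example; on a Docker install the database lives in the container’s data volume instead of $HOME/.config/Duplicati:

```sh
# Move the local job database out of the way instead of deleting it,
# so it can be restored if the repair goes wrong. XXXXXXXXXX.sqlite
# is a placeholder for the job's actual database filename.
mv "$HOME/.config/Duplicati/XXXXXXXXXX.sqlite" \
   "$HOME/.config/Duplicati/XXXXXXXXXX.sqlite.bak"

# Rebuild the local database from the remote data.
duplicati-cli repair "webdav://nas.example.com/backups" \
  --dbpath="$HOME/.config/Duplicati/XXXXXXXXXX.sqlite"
```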

It’s working for me now! I deleted the remote destination data and deleted the local database. Then I re-installed EndeavourOS (Arch) just to make sure I was starting from a “clean” system. I installed Duplicati 2.1.0.4_stable_2025-01-31 from the AUR. Both my /home backup of about 1,500 files and my NAS backup of about 15,000 files now work flawlessly! It certainly was a crazy error, with very little clue from the error message as to the cause. I’m just really glad it is working now because I was not looking forward to implementing a replacement…

Your experience shows, just like mine, that directly upgrading a large existing job to 2.1.0.3 seems to cause problems, and that applying the two points mentioned above (using 2.1.0.4 instead and/or recreating the database) fixes it. We will probably never know why, but at least we have a workaround…

Great, thanks for the feedback. Then I think my fix will solve it in the next release.

You can usually get some additional information if you go to “About”, then “Show log”. This should show the full stack trace of the error, which hopefully reveals a bit more than the generic error message.

Also, you can set --read-write-timeout=0 to fully disable it, if you suspect it is causing problems (even with 2 hours).
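
In case the exact syntax helps, a sketch of both settings as CLI options (the timespan format, URL, and source path here are assumptions for illustration):

```sh
# Raise the per-operation read/write timeout to 2 hours...
duplicati-cli backup "webdav://nas.example.com/backups" /data \
  --read-write-timeout=2h

# ...or set it to 0 to disable the timeout entirely.
duplicati-cli backup "webdav://nas.example.com/backups" /data \
  --read-write-timeout=0
```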

Sounds like a hard reset, but good to hear that you got it working.