I upgraded duplicati on my CentOs 7.7 server from 2.0.4.5-beta to 2.0.5.1.
Now my scheduled backup runs in an endless loop and never finishes.
In my verbose log file I can see the backup start as expected:
2020-01-28 02:30:54 +01 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: Die Operation Backup wurde gestartet (German: “The operation Backup was started”)
Then it runs through the normal stages, “Backend event: List - Started”, “Backend event: List - Completed”, some “Backend event: Put”, “RetentionPolicy-StartCheck”, then it deletes old files (“DeleteHandler-DeleteResults]: Deleted 2 remote fileset(s)”), and checks the remote files.
Then it starts all over again:
2020-01-28 02:50:41 +01 - [Information-GetGpgProgramPath-gpg]: gpg
2020-01-28 02:50:41 +01 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: Die Operation Backup wurde gestartet
2020-01-28 02:55:17 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started: ()
2020-01-28 02:55:28 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed: (2,65 KB)
and so on.
The backup is scheduled to run once a day at 02:30 AM.
When I click on “Stop after this file” it does stop, but the server crashes.
The backup it has made does show up in the “Restore files” list, but the “Last successful backup” info on the main screen is not up to date.
When I delete the backup schedule and run the backup manually, it runs the backup once, then the server crashes.
Interesting. If you DO have the schedule in place, does it crash between backups? Maybe Duplicati crashes before it can record that it completed the backup job, and when the service (automatically?) restarts, the backup job is immediately triggered.
I don’t really have an idea as to why it’d be crashing at that point, but can you confirm which version of mono you are using?
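For reference, a quick way to check is something like the following (a sketch; the rpm query assumes a CentOS/RHEL-style install where mono came from the mono-core package):

```shell
# Print the Mono runtime version; fall back to a message if mono
# is not on the PATH.
if command -v mono >/dev/null 2>&1; then
    mono --version | head -n 1
else
    echo "mono not found on PATH"
fi

# On an RPM-based system, the owning package (if any) can be queried with:
#   rpm -q mono-core
```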
I have something similar on
2.0.5.1_beta_2020-01-18
and found the cause was that the destination folder was unreachable (S3 in my case).
Backups ran endlessly, one after the next. Not sure if there should be a check of some kind to stop this.
Hope it helps
My destination folder (on a Microsoft OneDrive v2) is reachable. Files are written every time a backup runs. I can restore from it, too.
I added a second backup for testing: same destination, configuration newly created in 2.0.5.1-beta, fewer files to back up. This one runs fine. It shows 1 warning but no server crashes, and the “last successful backup” timestamp is shown correctly on the home page.
It seems to create a core file with every crash when I run the old config:
[…]
-rw------- 1 root root 681M 28. Jan 13:44 core.29268
-rw------- 1 root root 699M 28. Jan 14:03 core.31866
-rw------- 1 root root 742M 28. Jan 14:21 core.2619
-rw------- 1 root root 509M 28. Jan 14:37 core.5379
-rw------- 1 root root 703M 28. Jan 15:12 core.7671
My mono version is:
Mono JIT compiler version 4.6.2 (Stable 4.6.2.16/ac9e222 Mon Jul 31 05:33:23 UTC 2017)
from mono-core-4.6.2-4.el7.x86_64
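In case it matters, here is how I confirmed core dumps are enabled and where the kernel writes them, plus a sketch of pulling a native backtrace from one of the cores (assumes gdb is installed; core.29268 is just the first file from the listing above):

```shell
# Confirm core dumps are enabled and see the kernel's naming pattern:
ulimit -c                          # 0 would mean core dumps are disabled
cat /proc/sys/kernel/core_pattern  # file name pattern or crash handler

# A backtrace of all threads could then be extracted from a core
# (debug symbols help make it readable), e.g.:
#   gdb -batch -ex "thread apply all bt" "$(command -v mono)" core.29268
```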
It’s actually messier than that, as the autoupdate doesn’t use an RPM install.
There are also other packages that aren’t RPMs that “should” get updated…
This is complicated by the person who knows packaging not being available.
In a perfect world, everything would have been covered. The actual notice is:
Autoupdater definitely complicates things. But I agree that the rpm should have the dependency updated. I submitted a change earlier this week to have the deb package updated.
I’m not familiar with rpm packages, but it looks like it’s just this line that needs changing:
If anyone looks into the Red Hat install, note that duplicati-binary.spec also has a reference to mono 3.
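I don’t know what minimum version the new release actually needs, but the change would presumably be bumping that Requires line in duplicati-binary.spec to something like this (package name and version number here are guesses, not the actual spec contents):

```
# Hypothetical replacement for the mono 3 reference in duplicati-binary.spec;
# the exact minimum would need to match what 2.0.5.1 really requires.
Requires: mono-core >= 5.10.0
```

With that in place, yum would refuse to install or update the package on a system with an older runtime, instead of letting it crash at runtime.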
EDIT:
Another problem with some distros, especially the “Enterprise” or “LTS” ones, is they have old mono…
Probably declaring a dependency and having the install fail is at least a step less mystifying than a runtime crash.
I’ll have to look into it, but the current Synology package installs even if Mono isn’t installed at all. It’s as if there is no dependency set at all.