When Duplicati stops between backups, or DSM kills it off, I receive an email alert from Task Scheduler saying:-
Task Scheduler has completed a triggered task.
Task: Duplicati Service
Start time: Tue, 18 Jun 2019 19:32:14 GMT
Stop time: Sun, 23 Jun 2019 20:32:27 GMT
Current status: 9 (Interrupted)
Standard output/error:
I start Duplicati as a boot-up task running the following script:-
#!/bin/sh
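For anyone curious what that kind of boot-up task looks like, here is a minimal sketch. The paths, the install location of Duplicati.Server.exe, and the folder names are illustrative assumptions, not necessarily what is actually running on this NAS:

```shell
#!/bin/sh
# Illustrative DSM boot-up script for Duplicati (paths are assumptions).
# The point is to keep the server data folder and temp directory on a
# data volume so a DSM upgrade does not wipe them.
DATA=/volume1/duplicati/config   # assumed alternate data folder
TMP=/volume1/duplicati/tmp       # assumed alternate temp directory
mkdir -p "$DATA" "$TMP"

# The path to Duplicati.Server.exe depends on how the package was installed.
TMPDIR="$TMP" mono /usr/local/duplicati/Duplicati.Server.exe \
    --server-datafolder="$DATA" \
    --webservice-interface=any &
```

`--server-datafolder` tells the Duplicati server where to keep its databases, and setting `TMPDIR` moves Mono's temporary files off the system partition.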
My backups are scheduled at 03:00, 12:00 and 22:00, and none were active when the task was interrupted.
This started happening after upgrading the DSM firmware.
Using Duplicati Monitor I can see that the backups have stopped, and I can restart Duplicati in Task Scheduler when this happens.
I have downloaded /var/log/messages from the DSM (attached) but cannot make any sense of it; either DSM is killing Duplicati off or Duplicati is exiting for some reason.
I have no idea where Duplicati is storing its error logs, as I only have access through the DSM interface, being remote from the system. If there is an option to store the error logs somewhere else that I can reach through DSM, let me know what it is and I will set that up for the next time it occurs.
I have raised this as a case with Synology and will update with their findings.
I have used Duplicati on my Synology NAS for almost two years, but I am just using Duplicati’s own internal scheduler for running backups. Have you tried that instead of using DSM Task Scheduler?
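On the log question: the Duplicati server accepts `--log-file` and `--log-level` options, so adding something along these lines to the start-up command should put the logs somewhere reachable through DSM's File Station (the path here is just an example):

```shell
# Sketch: send Duplicati server logging to a shared volume so it can be
# fetched remotely through DSM (the log path is illustrative).
mkdir -p /volume1/duplicati/logs
mono /usr/local/duplicati/Duplicati.Server.exe \
    --log-file=/volume1/duplicati/logs/server.log \
    --log-level=Warning &
```

If the process is exiting on its own, a `Warning`-level log should capture the reason the next time it happens.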
I only start Duplicati with Task Scheduler at boot so I can specify an alternate location for the data folder and temp directory. The last DSM update blew away all my configuration, so I am future-proofing against the next upgrade.
The scheduling of the backups themselves is handled entirely by the Duplicati scheduler. At the time of the problem no backups were taking place and none were scheduled by Duplicati to start.
Synology's reply was:-
Thank you for contacting Synology.
Unfortunately we do not provide support for issues with Duplicati as this is not produced or maintained by Synology.
Please help to contact the developers of Duplicati for assistance.
Yes, you can’t keep your Duplicati data in the default location of /root/.config/Duplicati because it gets wiped out during major upgrades. Personally I just made a symlink to a folder in /volume1, though I have to recreate the symlink after major upgrades. And I still have to start Duplicati manually when my NAS reboots (which is not very often), so I like your approach.
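The symlink approach amounts to something like the following (the folder name under /volume1 is whatever you choose):

```shell
# One-time setup: move the default config dir onto a data volume and
# leave a symlink in its place (target folder name is illustrative).
mv /root/.config/Duplicati /volume1/duplicati-config
ln -s /volume1/duplicati-config /root/.config/Duplicati

# After a major DSM upgrade wipes /root, recreate just the symlink:
# ln -s /volume1/duplicati-config /root/.config/Duplicati
```

The data survives the upgrade because it lives on /volume1; only the symlink in /root has to be put back.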
I wonder if DSM Task Scheduler just has a maximum runtime allowed. I notice your task was terminated 5 days and 1 hour after it started. Out of curiosity is it always that timespan? I tried searching for max runtime for DSM Task Scheduler tasks but wasn’t able to find anything.
No, the time can vary, and it was working fine before the last DSM upgrade.
All the Task Scheduler does is start the process at boot time. The way I have set up DSM, it emails me any events, so when Task Scheduler notices the process was interrupted abnormally I get an email to that effect.
I did add my email to the alerting option for that Task Scheduler job, but I have never seen anything from it yet. It’s the same alerting email address used elsewhere in Synology, so I’m not sure what the issue is.
I checked my NAS today and the RAM was exhausted. For some reason I had a gazillion “Threadpool work” processes related to Mono/Duplicati. Not sure if it has anything to do with starting the software using Task Scheduler or not.
I ended the Mono/Duplicati process and all the “Threadpool work” processes went away too. Started it again using Task Scheduler and will monitor.
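For anyone who wants to check the same thing from a shell rather than htop, counting those worker threads looks roughly like this (assuming a `ps` that supports thread listing, which DSM's busybox `ps` may not):

```shell
# Count Mono worker threads named "Threadpool work" (the name as shown
# in htop). Each Mono process can own many of these threads.
ps -eL -o comm= | grep -c "Threadpool work"
```

A steadily growing count between backups would point at the same leak I saw.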
@progers885 have you monitored RAM usage on your NAS?
I don’t recall any issues with RAM in Resource Monitor when I use the remote DSM to log in and restart the Duplicati task in Task Scheduler. However, from what you say the RAM frees up when you kill Mono, so I would expect the RAM to look OK to me by that point.
I am remote to the site; the NAS is used in an architects’ practice, and they have never reported any performance issues leading up to or during the problem, which I would expect if the RAM were being hammered.
I have to rely on a workstation being available for me to SSH in and run htop. Luckily, due to staff holidays, I was able to log in today and run htop; there are no “Threadpool work” processes at all, which I would not expect to see anyway since there are no active backups, and it is between backups that the NAS experiences the problem I have reported.
I will log in more often to see if the RAM is being consumed, and will update this thread if it is.