Duplicati service stops running

Hello Guys,

I have Duplicati running on a good number of servers (281, to be exact). All of them back up the same things: two SQL databases and four folders. These folders can, and sometimes do, contain hundreds of thousands of files. I have noticed that sometimes the Duplicati service will be stopped. I have tried setting the service recovery settings to restart it, but that does not work. Any ideas as to why this is happening? It happens on about 5-10 servers per week on average, and not always the same servers.

Thanks in advance for your help!

You may be able to find the root cause of the crash using the --log-file argument when starting Duplicati on those servers.

By the way, are these Linux or Windows servers? The Linux systemd file provided on installation should have Restart=always defined, so it should automatically restart even if it occasionally crashes.

They are all Windows servers. Can you provide guidance on how to add the --log-file argument on startup?


I haven’t done much with Duplicati on Windows servers, but I believe you can update it with the install command, if that’s how you originally configured it, adding the parameter afterwards:
Duplicati.WindowsService.exe install --log-file=somepath

If you configured it otherwise, e.g. in the services console, I’m not entirely sure if that command will update it or if you will need to add the parameter as an argument within the console. If it’s running as a regular startup program within a user profile, you’ll likely need to edit that startup entry (sorry, these parts are a bit vague :confused: )
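If it was registered some other way, one option is to inspect and re-point the service with sc.exe. This is only a sketch: the service name Duplicati, the install path, and the log path are assumptions on my part, so check the actual name in services.msc first.

```shell
:: Show the current binary path and arguments for the service
:: (service name "Duplicati" is an assumption - confirm in services.msc)
sc qc Duplicati

:: Re-register the binary path with the logging arguments added.
:: Note: the space after binPath= is required by sc.exe syntax,
:: and the inner quotes must be escaped with backslashes.
sc config Duplicati binPath= "\"C:\Program Files\Duplicati 2\Duplicati.WindowsService.exe\" --log-file=C:\ProgramData\Duplicati\service.log --log-file-log-level=warning"
```

Run from an elevated prompt, then restart the service for the new arguments to take effect.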

No that makes sense. I’ll give it a shot.

Did the Duplicati service start at the last boot of those systems? If not, does the System event log show errors? On my system, the Service Control Manager shows timeouts at start during boot, though a later manual start works. My workaround was changing the startup type to Automatic (Delayed Start), after raising the ServicesPipeTimeout from 30 to 120 seconds proved not reliable enough. I just tested booting with Automatic startup plus Recovery set to infinite restarts: one timeout, and no service. Maybe a service must start at least once before Recovery helps?
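For anyone wanting to try the same two workarounds, here is roughly what I used from an elevated prompt. The timeout value is in milliseconds and needs a reboot to take effect; the service name Duplicati is an assumption, so verify it on your own machines.

```shell
:: Raise the Service Control Manager start timeout from 30s to 120s
:: (REG_DWORD value is in milliseconds; requires a reboot)
reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout /t REG_DWORD /d 120000 /f

:: Switch the service to Automatic (Delayed Start)
:: (the space after start= is required by sc.exe syntax)
sc config Duplicati start= delayed-auto
```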

By the way, it looks like Duplicati.WindowsService (if it actually starts) watches and restarts Duplicati.Server; however, it’s not clear to me whether your service started and then stopped (somehow) or never got started.

(Above helped, but not enough.)

(Above worked. I added analysis. If you like, look over your server Duplicati.WindowsService.exe start times.)

Custom service properties can get lost, so I hope a better fix happens someday. Thank you for the software!

If you plan to leave --log-file running even after this is resolved, consider setting up a --run-script-before or --run-script-after parameter to rename / cleanup the log files.

If you’re using something like --log-file-log-level=profiling the logs can get pretty big pretty quickly… :slight_smile:
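As a rough sketch of that cleanup idea (the script name, log path, and size threshold here are all made up for illustration), a small batch file could rotate the log once it grows large, and then be wired in via the --run-script-after advanced option on the backup job:

```shell
:: trim-log.bat - hypothetical cleanup script for --run-script-after
:: Rotates the Duplicati log once it grows past ~50MB (52428800 bytes)
@echo off
set LOGFILE=C:\ProgramData\Duplicati\service.log
for %%F in ("%LOGFILE%") do if %%~zF GTR 52428800 (
    move /y "%LOGFILE%" "%LOGFILE%.old"
)
```

You would then add something like --run-script-after=C:\scripts\trim-log.bat to the job's advanced options.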

Also, are you using any additional parameters (like --webservice-interface) that might make Duplicati slow to start?

@ts678, thanks for the link to the GitHub issue. Assuming I understood it correctly, you “recently” posted that you think this timeout might be due to verifying updates before trying to start the most recent one.

Have you tested reducing the number of older versions in the updates folder or even doing a “base version” install of your currently running update version?

@kenkendk, if a user is running an older base version (let’s say a canary build) and has a bunch of newer versions in the updates folder, and they choose to do a new base version installation (say, via an MSI), what happens to the updates folder?

Do older versions get purged or ignored at startup or are we still getting the hit of verifying “updates” that are older than the base version?

It never occurred to me that it may not be starting after reboot. I plan on changing to delayed start to see if that helps.

@JonMikelV Reverting to the base version by manually deleting updates fixes it. For every update I remove that is newer than the base (tested starting with two updates, then one), I remove about 80MB of file reads (disk time) and hashing (CPU processing). My original test PC is hard-disk limited. My older SSD PC might be CPU limited. :frowning:

To prove my theory, I used a debugger to return from VerifyUnpackedFolder() before the read loop. It worked.

My experience with versions older than the base is that such a version sitting in the updates folder wasn’t adding the usual increment of read load, but I’m not sure older versions never hurt. If nothing else, they use some space.

Great detective work there, thanks for letting us know!

Maybe the update checker could be changed to verify from newest to oldest and stop at the first viable update version it finds - that way there should “never” be more than one upgrade scan / verification needed.

Do you (or @Pectojin or @kenkendk) have any thoughts on that possibility?

@Omnicef, I’m sorry if I missed it - but are you running with many upgrades from your base version? If so and using delayed start doesn’t (continue to) solve your issue, consider moving some of the older ones to a different folder to see if that fixes it for you.

Thank you. The problem and the analysis have been cooking for a while. I was wondering if anyone else was seeing the issue (I hadn’t seen reports), so I’m eager to hear what @Omnicef (and 281 servers) can conclude.

For anyone not wishing to test in Process Explorer, I found (and adapted) an easier way to check start times:

wmic path win32_process get caption,creationdate | find "Duplicati"

and wmic can supposedly even access remote systems, though I apparently do not have access set up right.
The start times only show how close one is to actually timing out. Duplicati goes away if it actually times out…
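On systems where wmic is unavailable or deprecated, an untested PowerShell equivalent of that query would be something like:

```shell
# PowerShell near-equivalent of the wmic query above; Get-CimInstance
# also supports -ComputerName for remote systems, given suitable access
Get-CimInstance Win32_Process -Filter "Name LIKE 'Duplicati%'" |
    Select-Object Name, CreationDate
```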

My SSD system takes less time to chew through the 80MB. The difference between its Duplicati.WindowsService.exe start times at boot is around 11 seconds (I think I’ve seen 14 before), and the CPU usage is about 3 seconds.

For my system with the mechanical drive, even one update is too much. The drive is fairly full, and the model is slow (according to UserBenchmark), but it doesn’t seem to be failing. If it dies, though, I do have some backups. :grin:

Thanks for the software and support.