I have a duplicati docker installation and would like to find a way to do a handshake between the duplicati instance and the host. Reason is, I need to run scripts on the host before / after the backup.
Something like run-script-before is not feasible: the script has to run on the host. In my view, this option does not make much sense in a docker environment.
I do not like using duplicati-cli. I like the GUI, setting up the backups there, and seeing the logs. Afaik there is no proper way to manually start a backup that was configured in the GUI from duplicati-cli.
Next try: backups are set to run once per day. Once each day, the host runs the prepare scripts and starts the duplicati docker container.
Problem: the host does not know when the backups are finished, so I would have to wait some (hopefully long enough) time before stopping the container and running the cleanup script. Question: is there a way for the docker host to find out whether the backups are still running?
Tinker-Solution:
Each day the host runs a script that:
- runs the “prepare” actions
- creates an empty “handshake-file” in a location accessible by duplicati
- starts the duplicati container (duplicati then runs its time-triggered backup jobs; the last backup job has a run-script-after that deletes the “handshake-file”)
- waits until the “handshake-file” is deleted
- stops the duplicati container
- runs the “cleanup” actions
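For illustration, the daily host script could be sketched like this. Paths, the container name and the timeout are assumptions for my setup, and `prepare.sh` / `cleanup.sh` are placeholders for the actual prepare/cleanup actions:

```shell
#!/usr/bin/env bash
# Daily host-side handshake script (sketch). Assumptions: the handshake path
# is inside a volume mapped into the duplicati container, the container is
# named "duplicati", and prepare.sh/cleanup.sh stand in for your own actions.
set -eu

HANDSHAKE="${HANDSHAKE:-/srv/duplicati/handshake}"
TIMEOUT="${TIMEOUT:-10800}"    # failsafe: give up after 3 hours

# Wait until the handshake file disappears or the timeout expires.
# Returns 0 when the file is gone, 1 on timeout.
wait_for_handshake() {
  file="$1"
  deadline=$(( $(date +%s) + $2 ))
  while [ -e "$file" ]; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 10
  done
  return 0
}

main() {
  ./prepare.sh                  # your "prepare" actions
  : > "$HANDSHAKE"              # create the empty handshake file
  docker start duplicati
  wait_for_handshake "$HANDSHAKE" "$TIMEOUT" \
    || echo "WARNING: handshake timeout, forcing shutdown" >&2
  docker stop duplicati
  ./cleanup.sh                  # your "cleanup" actions
}

# main "$@"   # uncomment to run for real
```

The timeout is exactly the kind of failsafe needed against the file never being deleted.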
Not nice, but it should work. It is a bit fragile, though: the whole system breaks down once something goes wrong with the “handshake-file”.
Is there any better idea?
Further question: When I schedule 3 backup jobs like:
job1: start 00:05
job2: start 00:10
job3: start 00:15 (deletes the “handshake-file”)
What defines the order of these jobs? Will job 3 always be the last job, even if job1 takes 30 minutes to finish?
Set up a webhook server on the host and create webhooks that trigger the required scripts.
Then duplicati can trigger the webhooks via the run-script-before / run-script-after options (curl is available in the duplicati docker image).
Of course you have to evaluate the security risk: anybody who can reach the host can trigger these webhooks now.
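The container side of this is just a curl call from a run-script-after. A minimal sketch — the host address, port and hook path are assumptions and depend on your docker network and webhook server setup:

```shell
#!/usr/bin/env bash
# run-script-after sketch: notify the host via webhook that the backup is done.
# The URL (bridge gateway IP, port, hook name) is an assumption - adjust it.
set -eu

HOOK_URL="${HOOK_URL:-http://172.17.0.1:9000/hooks/backup-finished}"

notify() {
  # --fail makes curl return non-zero on HTTP errors; --retry papers over
  # short hiccups while the host-side handler is starting up.
  curl --fail --silent --show-error --retry 3 --max-time 30 "$1"
}

# notify "$HOOK_URL"   # uncomment to run for real
```

Checking the exit code of `notify` also gives you a hook for logging when the host could not be reached.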
There was a similar discussion here not that long ago - how to coordinate between the container and the host. It looks like you’ve gotten much further - you’ve got something working!
Given where you’re at, though, I don’t think you’ll find anything very helpful in it!
Looks like a very similar use case. Thanks for the hint.
Do you know: When I schedule 3 backup jobs like:
job1: start 00:05
job2: start 00:10
job3: start 00:15 (deletes the “handshake-file” or triggers the “backup finished” webhook)
What defines the order of these jobs? Will job 3 always be the last job, even if job1 takes 30 minutes to finish?
Honestly, I don’t know. There’s a good chance someone else will, though.
While this might make the sensitivity you mentioned a bit worse, you could do something like have a handshake file for each job. Each job deletes its file when it completes, and then your trigger is seeing all of them deleted.
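A minimal check for that per-job variant might look like this (the directory and the naming scheme are assumptions; each job's run-script-after would delete its own file):

```shell
#!/usr/bin/env bash
# Per-job handshake variant: each backup job's run-script-after deletes its own
# file (job1.handshake, job2.handshake, ...); the host polls until all are gone.
# Directory layout and naming scheme are assumptions.

all_handshakes_gone() {
  # succeed only when no job*.handshake files remain in the given directory
  ! ls "$1"/job*.handshake >/dev/null 2>&1
}
```

The host's wait loop then triggers on `all_handshakes_gone` instead of a single file, so one job failing to clean up is at least visible per job.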
Canary 2.1.0.106 addresses this via duplicati-server-util: the update adds a status command that shows which backup is running.
It also adds an option to wait for a backup to complete. With this you could use crond to schedule jobs and wait for their completion.
The logic uses a “task queue”: at the scheduled time the backup task is put into the queue, and if no task is currently active, it is immediately taken from the queue and executed.
Once a task has finished, the next task from the queue is started.
If the queue already contains the backup task being inserted, the new request is ignored and the task keeps its original queue position.
The reason for this is to avoid cases where Duplicati starts, queues the task, and then shortly after re-queues the task due to the schedule time.
So, in general, the backups will run in-order, unless they take longer than the time between scheduled runs.
The queue is not persisted, so restarting Duplicati gets a clean queue, and jobs are inserted based on their scheduled time.
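The duplicate-suppression rule can be modelled with a toy sketch (this is illustrative shell only, not Duplicati's actual code):

```shell
#!/usr/bin/env bash
# Toy model of the scheduler queue described above: inserting a task that is
# already queued is a no-op, so the task keeps its original position.
QUEUE=""

enqueue() {
  case " $QUEUE " in
    *" $1 "*) : ;;                 # already queued: ignore the new request
    *) QUEUE="$QUEUE $1" ;;
  esac
}
```

So enqueueing job1, job2 and then job1 again leaves the order job1, job2 — which is why, barring overruns, the jobs run in scheduled order.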
So just fyi, after a lot of tinkering, this is the solution I came to. It involves homeassistant (which I have anyhow) and installing webhook on the file server (= docker host).
Every day at 0:00 the homeassistant server triggers an automation:
- switch on the backup drive (via switchable power socket)
- trigger a webhook script on the file server that:
  - mounts the backup drive
  - waits 10s
  - starts the duplicati container
duplicati has all backups scheduled at 0:00, so they should run immediately.
At 0:05 a "dummy backup" job is scheduled which is only needed for triggering a script:
its run-script-after triggers a shutdown webhook on homeassistant (just a curl command, which is
luckily available in the docker container).
(This relies on the dummy backup job always running last, even if the other jobs are not finished at 0:05.)
The shutdown automation on homeassistant will:
- trigger a webhook on the fileserver which will:
  - stop the duplicati container
  - wait 10 seconds
  - unmount the backup drive
  - wait 10 seconds
- switch off the backup drive
Since a whole chain of events is needed for this to work, add some failsafe (like a timeout that shuts everything down after 3 hours even without receiving the “backup finished” notification) and you are good to go.
Of course, if you don’t need to power your drive on/off, you can skip the home automation server and trigger webhooks on the duplicati host directly for any other handshake.
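For reference, the two fileserver webhook handlers could be sketched like this. The container name, mount point and delays come from the steps above; everything else (paths, script layout) is an assumption:

```shell
#!/usr/bin/env bash
# Sketch of the two webhook handlers on the fileserver. Assumptions: the
# backup drive mounts at /mnt/backup and the container is named "duplicati".
set -eu

# called by the 0:00 startup webhook (after the power socket is switched on)
startup_sequence() {
  mount /mnt/backup
  sleep 10
  docker start duplicati
}

# called by the shutdown webhook from homeassistant
shutdown_sequence() {
  docker stop duplicati
  sleep 10
  umount /mnt/backup
  # switching the power socket off is left to homeassistant
}
```

Keeping both sequences as separate hooks means the failsafe timeout can call `shutdown_sequence` directly if the “backup finished” notification never arrives.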