I need to stop selected Docker containers before running a Duplicati backup, as some database files might be open at backup time and the backup would end up with corrupt database files.
I am able to run ./shared/docker-stop.sh and ./shared/docker-start.sh from the Duplicati docker console (docker exec -it duplicati /bin/bash), so the bash script is doing what I expect.
Duplicati is showing the following error:
The script “/shared/docker-stop.sh” returned with exit code 1. Please use the link below to see the verbose error at the bottom of the linked page.
Yes, I have 16 Docker containers, about 9 of which I have to stop in order to back them up properly; otherwise there are open files, which will give me a corrupt backup/restore.
From my original post:
I am able to run ./shared/docker-stop.sh and ./shared/docker-start.sh from the Duplicati docker console (docker exec -it duplicati /bin/bash), so the bash script is doing what I expect.
What if you use an absolute path when you specify the script in Duplicati, instead of a relative path? Relative pathing depends on the current working folder, so absolute pathing may work better.
As Docker containers require paths to be added or allowed, I have that /shared path mapped for ALL my containers; hence the use of /shared/docker-stop.sh. What is your example?
I meant to use a path that doesn’t start with a dot. But according to your error message, it looks like you do have Duplicati configured to use absolute pathing (it starts with a slash).
Can you post your script? (Sanitize sensitive info if needed…)
Also, when you run it manually, did you check the return code?
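For reference, something like this inside the container shows the exit status of the last command (0 means success):
./shared/docker-stop.sh
echo $?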
So wait, you are running the script on the host machine? That’s quite different from trying to run the script from within a container itself. When you have Duplicati try to run that script, it runs within the confines of its own Duplicati container.
Indeed when I check my duplicati docker container it has no idea what the ‘docker’ command even is:
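Roughly, the check looks like this (illustrative, not the exact output of my screenshot):
docker exec -it duplicati which docker
# no output - the docker binary is not present in the container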
I know it is possible for a docker container to interact with the docker engine on the host. The ‘watchtower’ container (as an example) does that in order to update other containers and restart them. It requires mapping a socket from the host to within the docker container so that the container can communicate with the host docker engine API.
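As a rough sketch of what I mean (the image name may vary), watchtower gets the host socket mapped in like this:
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower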
This is beyond my knowledge and experience with docker, but maybe that will help point you in the right direction to research further.
That’s why I have been saying that I am ABLE to run it from my Docker container console, via the command mentioned above and via Portainer’s console. For such docker commands to work, the following needs to be added (see the mappings below):
And THEN, and only THEN, can you execute docker commands, which, again, I am able to do from the docker console. I used to have this working on my Debian box, but for some reason I can’t get it working under my Ubuntu.
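Roughly, these are the volume mappings I am referring to (the docker socket and the docker binary from the host; exact paths may differ per setup):
-v /var/run/docker.sock:/var/run/docker.sock
-v /usr/bin/docker:/usr/bin/docker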
I got it to work with Ubuntu 19.10, but not with the docker engine available in the normal Ubuntu application store. That seems to install some “snap” version of the package, which I am not familiar with at all. The docker binary isn’t located at /usr/bin/docker - see screenshot:
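For context, on a snap install the binary typically lives under /snap/bin rather than /usr/bin, e.g.:
$ which docker
/snap/bin/docker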
Regardless, I tried adjusting the volume mappings in the docker container to reference the snap binary but it didn’t work. Maybe you can get it to work with a bit more effort.
Instead I uninstalled the package from the Application Store and installed the regular docker.io from the command line (this is how I’m used to doing it on Debian):
# apt install docker.io
As a bonus, it’s a newer version than what’s available in the application store, and its binaries are in the normal location:
After I started the container, I launched a shell session inside the container using docker exec -it duplicati /bin/bash. From there I was able to see the running docker containers (of which I only had one on this VM). I could even stop the container from within itself:
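The session looked roughly like this once inside the container (container name assumed to be duplicati):
docker ps                # lists containers running on the host, via the mapped socket
docker stop duplicati    # the container can even stop itself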
So I believe this may be your best solution - although you will have to jump through some hoops (removing the version from the application store, installing docker.io, and redoing your containers).
Final thought: the duplicati/duplicati docker container is the “official” version and updates are available immediately. linuxserver/duplicati lags behind a bit.
So I know this is very old, but since you weren’t really given a solution, I thought (after having the linuxserver.io guys hold my hand through the fix) I would let you know what I did, in case you are still having issues. While you were able to run the script when you exec into the container, typing docker exec -it duplicati /bin/bash drops you in as root with uid:gid of 0:0, and as root you can run the docker start/stop commands and the script successfully. The issue is that the linuxserver/duplicati instance runs under the user ‘abc’. If you try to run your script after exec’ing in as that user, like so: docker exec -it -u abc duplicati /bin/bash, you will get a bunch of errors saying access to the docker sock is not allowed. To overcome this, you need to add a DOCKER_MOD to your linuxserver/duplicati image that will, upon container creation, find the docker gid on your host system and then assign user abc inside the duplicati container to a dockergroup group with the gid of your host’s docker group (mine was 998).
You can enable this by adding DOCKER_MODS=linuxserver/mods:code-server-docker to your Duplicati environment variables in your compose or run command. Bonus: you don’t have to map the volume for /usr/bin/docker, as the mod’s script handles that too; you just leave the docker sock volume mount in. After that you can run the script after exec’ing in as user abc as a test, and then add the script to the interface in Duplicati using /share/script.sh (I think that was your example).
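A minimal sketch of what the run command ends up looking like (the shared path is a placeholder for your own mapping):
docker run -d \
  --name duplicati \
  -e DOCKER_MODS=linuxserver/mods:code-server-docker \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /path/to/shared:/shared \
  linuxserver/duplicati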
Alternatively, you can just set the UID/GID that your container starts with to match the docker group on the host system.
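For example (998 is just the value from my host; check yours with getent group docker), the linuxserver images take this as environment variables:
-e PUID=1000 -e PGID=998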
Here is a script that you can use to stop the containers AND have it only do so when a backup is actually being performed. I found out that Duplicati runs the before/after scripts even when doing something like a restore:
#!/bin/bash
# docker stop script - used as Duplicati's "before" script
# Duplicati exports the current operation in DUPLICATI__OPERATIONNAME
OPERATIONNAME=$DUPLICATI__OPERATIONNAME

# Only stop containers for an actual backup, not for restores, verifications, etc.
if [ "$OPERATIONNAME" == "Backup" ]
then
    docker stop <container_name1> <container_name2> <...>
else
    exit 0
fi
Just make a second copy and change it to docker start ... of course for the “after” script.
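For completeness, a sketch of the corresponding "after" script (container names are placeholders):
#!/bin/bash
# docker start script - used as Duplicati's "after" script
if [ "$DUPLICATI__OPERATIONNAME" == "Backup" ]
then
    docker start <container_name1> <container_name2> <...>
fi
exit 0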