Access Host Docker commands inside Duplicati container

I am getting DB read-access errors stating that files are open when Duplicati backs up. So I thought of stopping all containers except Duplicati (installed via Portainer) and Portainer itself, and backing up the rest of the directories of my installed apps.
I made one script to stop my containers and another to start them.
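
For reference, a minimal sketch of what such a stop script could look like (the container names and the keep-list are placeholders, not my exact script):

    #!/bin/bash
    # Stop every running container except Duplicati and Portainer, and
    # remember their names so the post-backup script can restart them.
    KEEP="duplicati|portainer"
    docker ps --format '{{.Names}}' | grep -Ev "^(${KEEP})$" > /config/stopped_containers.txt
    xargs -r docker stop < /config/stopped_containers.txt

The start script would then just run "xargs -r docker start < /config/stopped_containers.txt".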

I tried to configure Duplicati's pre- and post-script run options to run these scripts. I also mounted the Docker socket into the Duplicati container as follows. But the docker ps command is not recognized inside the Duplicati container. How do I fix it?

2024-12-08 18:16:19 +00 - [Warning-Duplicati.Library.Modules.Builtin.RunScript-StdErrorNotEmpty]: The script "/SparkyApps/scripts/duplicati_scripts/pre_daily_backup.sh" reported error messages: /SparkyApps/scripts/duplicati_scripts/pre_daily_backup.sh: line 17: docker: command not found


services:
  duplicati:
    image: lscr.io/linuxserver/duplicati:latest
    container_name: duplicati
    environment:
      - PUID=0
      - PGID=0
      - TZ=Etc/UTC
      #- CLI_ARGS= #optional
    volumes:
      - /home/sparky/SparkyApps/duplicati/config:/config
      - /mnt/crucial/duplicati/tmp:/tmp
      - /mnt/crucial/backups:/backups
      - /mnt/:/source
      - /home/sparky/SparkyApps/:/SparkyApps
      - /var/run/docker.sock:/var/run/docker.sock # Add this line to mount Docker socket

    ports:
      - 8200:8200
    restart: unless-stopped

Duplicati docker installation - Docker in Docker support
explains how to run docker commands inside your Docker container.
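
Once the mod is in place, a quick sanity check from the host (using the container name from the compose above) might be:

    docker exec duplicati docker ps

If that prints a container list, your pre/post scripts should be able to find docker too.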

Docker commands work with the following compose file. But I still have two issues.

  1. It shows a lot of errors in the log, but my scripts seem to be running fine before and after the backup as expected (at least I couldn't find any issues) apart from the many errors in Portainer.
  2. The before and after scripts also run when I try to restore. Is there any way to disable that and use them only for backup, not for restore? They stop everything including NGINX and AdGuard Home DNS, so I lose the connection to Duplicati as well.

services:
  duplicati:
    image: lscr.io/linuxserver/duplicati:latest
    container_name: duplicati
    environment:
      - DOCKER_MODS=linuxserver/mods:universal-docker-in-docker
      - PUID=0
      - PGID=0
      - TZ=Etc/UTC
      - SETTINGS_ENCRYPTION_KEY=SparkyDuplicati123
      #- CLI_ARGS= #optional
    privileged: true
    volumes:
      - /home/sparky/SparkyApps/duplicati/config:/config
      - /mnt/crucial/duplicati/tmp:/tmp
      - /mnt/crucial/backups:/backups
      - /mnt/:/source
      - /home/sparky/SparkyApps/:/SparkyApps
      - /var/run/docker.sock:/var/run/docker.sock # Add this line to mount Docker socket
    ports:
      - 8200:8200
    restart: unless-stopped

#Backup commands for Daily
#--run-script-before=/SparkyApps/scripts/duplicati_scripts/pre_daily_backup.sh
#--run-script-after=/SparkyApps/scripts/duplicati_scripts/post_daily_backup.sh

Error from Portainer log:

is only available from another source
E: Unable to locate package btrfs-progs
E: Unable to locate package iptables
E: Unable to locate package openssh-client
E: Unable to locate package pigz
E: Unable to locate package xfsprogs
E: Package 'xz-utils' has no installation candidate
[custom-init] No custom files found, skipping...
sed: couldn't flush stdout: Device or resource busy
Inside getter
sed: couldn't flush stdout: Device or resource busy
**** Enabling QEMU ****
Connection to localhost (::1) 8200 port [tcp/*] succeeded!
Server has started and is listening on port 8200
sed: couldn't flush stdout: Device or resource busy
Setting /usr/bin/qemu-alpha-static as binfmt interpreter for alpha
Setting /usr/bin/qemu-arm-static as binfmt interpreter for arm
Setting /usr/bin/qemu-armeb-static as binfmt interpreter for armeb
Setting /usr/bin/qemu-sparc-static as binfmt interpreter for sparc
Setting /usr/bin/qemu-sparc32plus-static as binfmt interpreter for sparc32plus
Setting /usr/bin/qemu-sparc64-static as binfmt interpreter for sparc64
Setting /usr/bin/qemu-ppc-static as binfmt interpreter for ppc
Setting /usr/bin/qemu-ppc64-static as binfmt interpreter for ppc64
Setting /usr/bin/qemu-ppc64le-static as binfmt interpreter for ppc64le
Setting /usr/bin/qemu-m68k-static as binfmt interpreter for m68k
Setting /usr/bin/qemu-mips-static as binfmt interpreter for mips
Setting /usr/bin/qemu-mipsel-static as binfmt interpreter for mipsel
Setting /usr/bin/qemu-mipsn32-static as binfmt interpreter for mipsn32
Setting /usr/bin/qemu-mipsn32el-static as binfmt interpreter for mipsn32el
Setting /usr/bin/qemu-mips64-static as binfmt interpreter for mips64
Setting /usr/bin/qemu-mips64el-static as binfmt interpreter for mips64el
Setting /usr/bin/qemu-sh4-static as binfmt interpreter for sh4
Setting /usr/bin/qemu-sh4eb-static as binfmt interpreter for sh4eb
Setting /usr/bin/qemu-s390x-static as binfmt interpreter for s390x
Setting /usr/bin/qemu-aarch64-static as binfmt interpreter for aarch

run-script-example.sh shows how one writes a script so that it runs only when desired, e.g. using

if [ "$OPERATIONNAME" == "Backup" ]

Also notice that the stdout, stderr, and exit code from the script can control Duplicati or go into its log.
This makes it especially odd that sed doesn’t like its stdout, but what environment is this in?

Regardless, I’m not a Docker expert, and I don’t see any of the errors as coming from Duplicati.
Most of the errors I searched on Google turned up some results. Some might apply to your usage.

Thanks a lot. I am able to run the scripts only for the Backup operation now.
The only issue I have now is the errors in Portainer. I also noticed they come only when I use the following environment variable. I will leave this open for someone with Docker in their setup to comment on whether I am doing anything wrong. Thanks a lot for your help.

  • DOCKER_MODS=linuxserver/mods:universal-docker-in-docker

That narrows this down well. So it’s not dependent on the script content?

That might be unlikely here. Few people read all of the topics, and probably fewer have seen this.

https://discourse.linuxserver.io/

might be a more likely forum, BUT I just Google searched “DOCKER_MODS” “binfmt interpreter”

[BUG] Docker in Docker does not work in linuxserver/code-server container #865

has similar error messages, but might also be a bit different. Anyway, asking LSIO might be more productive if this is purely a problem with LSIO stuff. Give them a nice clean test case to look at.

Thank you. I have posted in their forum as well. I also noticed this error comes only when I mount /tmp. If I remove that volume mapping from the compose file, everything installs fine and there are no errors in the log.
The reason I added “- /mnt/crucial/duplicati/tmp:/tmp” is that I am using Duplicati to back up 400 GB+ of images and videos. The backup job fails stating that the tmp folder is full, as my primary boot SSD is just 240 GB. So I mount “/tmp” from another SSD, 4 TB in size, to avoid this error. But adding this line causes issues with “- DOCKER_MODS=linuxserver/mods:universal-docker-in-docker”.

I resolved the error by removing the /tmp volume mapping and providing that path using an advanced option in the backup configuration. Now I am all set. Thank you for your help.

A large source should not need more tmp space unless you set the Remote volume size on the Options screen very high. The default is 50 MB. There is info there, and also a link:

Presumably tempdir.
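
For example (a sketch; the /duplicati-tmp path is an arbitrary name, not from the post above): map the big disk to some path other than /tmp,

    volumes:
      - /mnt/crucial/duplicati/tmp:/duplicati-tmp

and point the backup job’s tempdir advanced option at it:

    --tempdir=/duplicati-tmp

That leaves the container’s own /tmp alone, so the DOCKER_MODS install no longer collides with the mount.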

If you chose a very large remote volume size, there’s another argument against doing so, explained at the link in its description: you may need to download multiple volumes to restore a single file, because its blocks may reside in multiple remote volumes. This can make for slow restores.

Your use case of mostly images and videos might also merit a larger blocksize, because these are possibly already compressed (not always – some people use raw), so they deduplicate poorly. Increasing the blocksize reduces deduplication overhead in large backups, e.g. if over 1 TB total.
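
As a sketch (the 1 MB value is only an illustration, not a tested recommendation for this backup), that would be another advanced option on the job:

    --blocksize=1MB

Note that blocksize cannot be changed once a backup exists; picking it is a fresh-start decision.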

Restore speed testing can be deceptive if you are on 2.0.8.1, as it gets blocks from the source files if it can. Possibly you are on something newer, as I think LSIO keeps their latest tag quite up to date.

Good news overall though. Thanks for reporting it.

I have set the remote file size to 100 GB. Is that not recommended? Soon I am going to set up another job to back up to Backblaze. I have just 20 Mbps upload speed. Should I leave it at 50 MB? I thought it would create a lot of files, so I set a 100 GB file size to reduce the number of files created in the backup directory.

Way too big IMO. See the article shown in the GUI, or below in the new docs. They might differ a bit:

Remote volume size suggests 50 MB - 500 MB for a cloud provider. A local destination can take larger.

It’s true that some providers can’t handle lots, but I haven’t heard of such limitations with B2.
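
If you prefer text config over the GUI, the same setting is the dblock-size advanced option, e.g. (the 200 MB value is just an illustration within the suggested range):

    --dblock-size=200MB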

EDIT:

Using a big remote volume size is also how one fills up /tmp, because the volume files are built there.
There’s a limit on the backlog, but it’s defined as a number of files, so big files can cause a big upload backlog.
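
To make that concrete (assuming the queue is capped by the asynchronous-upload-limit option at its default of 4): four queued 100 GB volumes is up to ~400 GB of temporary files, which by itself overflows a 240 GB boot SSD, while four 50 MB volumes need only ~200 MB.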