Running scripts with sudo

Hi,

I’m trying to mount a drive before running a backup (the drive is normally in a sleep state, so it needs to be mounted for this). I am trying to use run-script-before to run the command "sudo mount /dev/sda1 /mnt/NAS", however this isn’t working because of needing sudo:

[Error-Duplicati.Library.Modules.Builtin.RunScript-InvalidExitCode]: The script "/config/mount.sh" returned with exit code 127: /config/mount.sh: line 3: sudo: command not found

But I am unable to run the command without sudo:

[Error-Duplicati.Library.Modules.Builtin.RunScript-InvalidExitCode]: The script "/config/mount.sh" returned with exit code 32: mount: /mnt/Backup: must be superuser to use mount. dmesg(1) may have more information after failed mount system call.

Is there a way to run scripts requiring sudo? I am using docker compose:

  duplicati:
    image: lscr.io/linuxserver/duplicati:latest
    container_name: duplicati
    environment:
      - PUID=1000
      - PGID=1000
      - SETTINGS_ENCRYPTION_KEY=xxx
      - DUPLICATI__WEBSERVICE_PASSWORD=xxx
      - TZ=Europe/London
    ports:
      - "8200:8200"
    volumes:
      - /mnt/NAS/Config/Duplicati:/config
      - /dev/sda1:/dev/sda1
      - /mnt:/mnt

Hi, welcome to the forum!

Your problem is that the script doesn’t know the path to "sudo". You can either find a way to give it that knowledge, or take the simple way out and use an absolute path to the sudo command instead of just "sudo" - for example "/usr/bin/sudo" (which is the path on my system, but it could theoretically be somewhere else for you).
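As a sketch of that idea (all paths here are assumptions - verify them on your own system), the script can resolve the path itself and fall back to a hardcoded location:

```shell
#!/bin/sh
# Sketch of /config/mount.sh: call sudo by absolute path so the script does not
# depend on PATH inside the container. /usr/bin/sudo is an assumption; check
# the real location on your host with: command -v sudo
SUDO="$(command -v sudo || echo /usr/bin/sudo)"
echo "would run: $SUDO mount /dev/sda1 /mnt/NAS"
```

(The real script would run the command instead of echoing it; the echo is just to keep the sketch harmless.)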

L

Apologies, I forgot to say I had already tried that. I bound my /usr path with:

volumes:
  - /mnt/NAS/Config/Duplicati:/config
  - /usr:/tools
  - /dev/sda1:/dev/sda1
  - /mnt:/mnt

So my mount command becomes:

/tools/bin/sudo mount /dev/sda1 /mnt/Backup

However, when I run it with this it says:

[Error-Duplicati.Library.Modules.Builtin.RunScript-InvalidExitCode]: The script "/config/mount.sh" returned with exit code 127: /tools/bin/sudo: error while loading shared libraries: libsudo_util.so.0: cannot open shared object file: No such file or directory

libsudo_util.so.0 is located in /tools/libexec/sudo and therefore should be accessible by the container as can be seen with:

docker-compose -f "/mnt/NAS/Config/Docker Compose/docker-compose.yml" exec duplicati ls /tools/libexec/sudo
audit_json.so libsudo_util.so.0 sudoers.so system_group.so
group_file.so libsudo_util.so.0.0.0 sudo_intercept.so
libsudo_util.so sesh sudo_noexec.so

https://www.linuxserver.io/support has some support channels. We do mainly Duplicati here.

At first I thought that mount command was set up directly in Duplicati’s configuration, but I suspect you are using a helper script.

Current Duplicati 2.1.0.2 Beta (which may or may not be what you run – look at About) has a

--run-script-with-arguments (Boolean): Enable script arguments
This option enables the use of script arguments. If this option is set, the script arguments are treated as commandline strings. Use single or double quotes to separate arguments.
* default value: false

Without that option, you have to name a script file, which can then contain more elaborate lines. I suspect that is what you are doing.

For what purpose, though? I don’t have such a drive, but don’t drives normally spin up on access, e.g. while staying mounted? Avoiding the mount would avoid this whole chase.

I think I found posts saying volumes: is for mounted volumes and devices are done differently
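If so, a sketch of what that would look like (assuming docker compose’s `devices:` key is what those posts meant):

```yaml
# Sketch: pass the block device with "devices:" instead of "volumes:"
devices:
  - /dev/sda1:/dev/sda1
```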

In general, I don’t “think” running host code in a container is advisable, due to compatibility risks.

An executable only looks in certain places, not everywhere that might be readable. More here:

3. Shared Libraries

Since the container may be a different Linux distro than the host, other mismatches may occur.
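One way to see the mismatch directly is `ldd`, which prints each shared library the dynamic loader can resolve for a binary; anything listed as “not found” is invisible to the loader even if the file itself is readable somewhere on disk. Also worth knowing: sudo is a setuid binary, and the loader ignores LD_LIBRARY_PATH for setuid programs, so exporting that variable won’t help here.

```shell
# ldd lists the shared libraries the loader resolves for a binary; missing
# ones show up as "not found". Shown on /bin/sh just to illustrate the
# output; inside the container you would inspect /tools/bin/sudo instead.
ldd /bin/sh
```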

EDIT:

If the only issue is sudo, then you can get root in other ways (with added security worries), e.g.

[quote="mattdd50, post:1, topic:19636"]
- PUID=1000
- PGID=1000
[/quote]

sometimes has to be 0 simply to get enough access to do what a backup program needs to do.
If you change, be very sure to use a good password on the web UI, and keep it off the Internet.
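A sketch of that change in the compose file (with the security tradeoff above in mind):

```yaml
# Sketch: run the container's processes as root
environment:
  - PUID=0
  - PGID=0
```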

I think I misunderstood. After rereading and your latest response, it sounds like you want to initiate an action (to mount /mnt/NAS) in the operating system outside the container from inside the container. Is that the case?

I know this is tangential to your question, but if that is the case, why isn’t /mnt/NAS just always mounted on the host operating system vs having to mount it “at will”?

L

I am mounting a portable hard drive that, unless I explicitly put it into a sleep mode, will just stay powered on and spinning. So I need to unmount it each time it is not in use to be able to put it into that sleep mode.

I think I am going to give up on this and try a different method

I will ask here for further help.

Thanks

I really need to work on my reading comprehension. Sorry to have you explain that twice. :slight_smile:

Going back to your challenge, off the top of my head I can think of three possible approaches:

  1. Think of this as two separate hosts. You can set up some form of remote script execution initiated from within the container using something like “rexec”.
  2. With a quick google search (meaning I didn’t read too deeply) there IS a way to literally just run a command in the host OS from within the container. See this: How to run shell script on host from docker container? - Stack Overflow
  3. A bit hackish, but since you do have a shared (between outside and inside the container) file system you could “communicate” through it. Inside the container you could create a specific file when you want to mount the drive (e.g. “touch /mnt/foo”) then outside the container you set up a cron job that is a script watching for the creation or deletion of that file and when it sees it, it mounts your drive, and when it goes away, it unmounts it. Note that one issue with this is that your script inside the container would need to sleep, after creating the file, for long enough for the cron job to find the change and do the mount. So if the cron job runs every minute, you’d want to sleep for, say, 2 minutes.
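A minimal sketch of option 3’s host-side script (the flag-file name and all paths are assumptions; the real version would run mount/umount as root from cron):

```shell
#!/bin/sh
# Host-side watcher (sketch): cron runs this every minute as root, e.g.
#   * * * * * /usr/local/bin/nas-watch.sh
# FLAG is the file the container touches; all paths below are assumptions.
FLAG="${FLAG:-/mnt/flag-backup-wanted}"
DEVICE="${DEVICE:-/dev/sda1}"
MOUNTPOINT="${MOUNTPOINT:-/mnt/Backup}"

decide() {
    # Decide which command the watcher should run, based on the flag file.
    if [ -e "$FLAG" ]; then
        echo "mount $DEVICE $MOUNTPOINT"
    else
        echo "umount $MOUNTPOINT"
    fi
}

# The real cron job would execute the chosen command (e.g. decide | sh);
# here we only print it so the sketch is safe to run anywhere.
decide
```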

Good luck!

L

1 Like