I’m running Duplicati as a server in a Debian-based Docker container, and I want to log backup job output to stdout so I can view it with docker logs.
I tried setting the advanced log-file option to /dev/stdout, but that gives me the following error message whenever I try to perform an action (like a database repair):
The stream does not support seeking
A) Is there a better way than this to get the log data to a location where I can use it?
B) Is there a better way to send logs to stdout from within Duplicati?
I installed using the duplicati_2.0.3.3-1_all.deb file downloaded from duplicati.com.
Thanks for any tips!
Edit: I did see this topic, but there was no resolution.
Are you trying to create a file called /dev/stdout or are you trying to get the normal output to show up in a shell (which seems kind of odd to me if running in a Docker container)?
What is the ultimate goal here - just to log the runs somewhere or are you wanting actual output from the backup job?
I believe that the docker logs command pulls its data from the stdout and stderr of the container. At least that seems to be the case with the httpd and php:7-apache containers I’ve used; they always have a special entrypoint that runs Apache in the foreground. (I also think you can configure it differently with Docker’s logging drivers, but I have not experimented with that yet.)
I want to be able to run docker logs duplicati_container and see the log output from Duplicati.
If you echo blah > /dev/stdout then “blah” is printed to stdout. So I was hoping that setting the log-file option in Duplicati to /dev/stdout would make Duplicati print its logs to stdout.
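For what it’s worth, here’s roughly how that looks from inside a container (a sketch; the exact output, and whether fd 1 is a pipe or a pty, depend on how the container was started):

```sh
# /dev/stdout is just a symlink to the current process’s fd 1,
# which under Docker is usually a pipe (or a pty with -t):
ls -l /dev/stdout
# lrwxrwxrwx ... /dev/stdout -> /proc/self/fd/1
ls -l /proc/self/fd/1
# lrwx------ ... 1 -> pipe:[123456]   (number will vary)

# Sequential writes work fine, which is why the echo test succeeds:
echo "blah" > /dev/stdout

# But pipes don’t support lseek(2), so a program that opens the
# “file” and then tries to seek on it (apparently what Duplicati’s
# log writer does) fails with “The stream does not support seeking”.
```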
It sure does - now I know what you’re trying to do plus I learned something new about Docker containers.
Unfortunately, I think the --log-file parameter literally tries to create a file there.
The topic you referenced was my musings on making the GUI easier to use by having normal stdout output sent to a file, so even if it had been implemented (other than via the suggested pipe) it wouldn’t do what you’re attempting.
So the ultimate issue here is that the only thing the Duplicati server normally sends to stdout is “Server has started and is listening on 0.0.0.0, port 8200”.
@kenkendk or @Pectojin , with the new logging functionality how hard would it be to make stdout a potential log “destination” (complete with the new filter functionality)? (Wait, has this already been covered in another topic?)
And now that I re-read it I see there is a --console-log-filter option that showed up in 2.0.3.2, so it SHOULD be in 2.0.3.3. While I expect that would be useful if you could get content to the console, the issue here is that NOTHING gets there in the first place.
Yes and no. In the GUI you’ll find logs in the main menu “About” section, which are general server logs, while the ones you get from the job “Show logs” link are specific to that job.
The job specific logs are stored in the job database while the generic logs are stored in the main database.
But I agree - the issue is that the service doesn’t spit anything out to console / stdout. With the new logging code it’s probably not a difficult thing to add - for the right person (which isn’t me right now).
I’m not seeing anything similar with console or stdout searches at GitHub so maybe adding an issue there might get some traction from other developers…
Oh - and here’s a super-hacky possible workaround…
What if you called a “command line” version of the backup from a --run-script-before-required script, then aborted the scheduled GUI backup?
Granted, you’d have to use “Export as command-line” to get it into a script (and remember to re-export if you make any changes) but it might do what you want… Something like the sketch below.
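Completely untested, and everything here is a placeholder you’d fill in yourself: the script path, the exported duplicati-cli command, and the --console-log-level option (check that your version has it). The exit-code handling is my reading of the run-script-example.sh that ships with Duplicati:

```sh
#!/bin/sh
# Hypothetical /config/scripts/run-backup.sh, attached to the GUI job
# via --run-script-before-required=/config/scripts/run-backup.sh
#
# The script inherits the server’s stdout, so anything the CLI prints
# should end up in `docker logs`.

# Placeholder: paste the output of “Export as command-line” here,
# adding whatever console logging options your version supports:
duplicati-cli backup "s3://example-bucket/backups" /source \
  --passphrase="REPLACE_ME" \
  --console-log-level=Information

# Exit code 1 should mean “OK, but don’t run the operation” per the
# bundled run-script-example.sh, aborting the scheduled GUI backup:
exit 1
```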
I’d need to kill the duplicati-server instance so that the new duplicati-cli command could take over stdout, at least I think… Then restart the server instance when the CLI was done… I’m not entirely sure how Docker would handle that. I believe the container depends on the entrypoint process staying alive, so if the entrypoint (the duplicati-server instance) were killed, I would expect the container to die as well.
For now, I’ll just set --log-file to point to a location mounted from the host. That way the log file doesn’t get deleted if the container is restarted. It’s not ideal; I’d rather have every log from the container going to one location so that my remote logging tools can easily pick it up, but it’s better than nothing.
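Roughly this, in case it helps anyone else (the image name and paths are just examples):

```sh
# Mount the log directory from the host so the log survives
# container restarts (image name and paths are placeholders):
docker run -d --name duplicati_container \
  -v /srv/duplicati/config:/config \
  -v /srv/duplicati/logs:/logs \
  -p 8200:8200 \
  my-duplicati-image

# Then set the advanced option --log-file=/logs/duplicati.log on the
# server (or per job); remote logging tools can watch
# /srv/duplicati/logs/duplicati.log on the host.
```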
I’m also considering just running every job as a one-off via GitLab CI’s scheduled jobs. That’d give me the output of the jobs in the GitLab UI. I’d only need to start a duplicati-server instance if I need the restore wizard.
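A one-off run would look something like this (a sketch; the image name, backend URL, and options are placeholders, and it assumes the image lets you override the command):

```sh
# One-off backup: duplicati-cli is the container’s foreground process,
# so its output goes straight to the CI job log (and to docker logs):
docker run --rm \
  -v /srv/duplicati/config:/config \
  -v /data:/source:ro \
  my-duplicati-image \
  duplicati-cli backup "s3://example-bucket/backups" /source \
    --passphrase="REPLACE_ME" \
    --dbpath=/config/example-job.sqlite
```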
For viewing logs after the job is done, I’d think that would work. Not so much while the job is running; like if something odd is going on and the job is taking way longer than normal… It is a good idea to keep in mind, though.