Duplicati on Synology via Docker?!

Hello everyone.

Is there a guide on how to properly install and access Duplicati via Docker container on a Synology NAS?

I managed to pull the Duplicati Canary in Docker and it does start and run. But how do I access its web UI from within the NAS or maybe even from an external PC?

And do I need to run a Mono container alongside the Duplicati container?

When I try to set the local port of the Duplicati container to 8200, Docker tells me that it is already in use, even though the Duplicati container is the only one running?!

Obviously I am still confused, because I have never used Docker before and have also only just started looking into Duplicati.

I don’t think an official how-to has been written. Part of your confusion may be that you are new to docker. I’ve been there myself. Take a little time to learn about it and I think you’ll find it a great platform! Being able to install and upgrade software packages with ease, without worrying about software dependencies or conflicts, is so nice.

If you wanted to use a web browser on the NAS itself, you would point to http://localhost:port where port is the tcp port assigned to the docker container and mapped to 8200 within the container. And you should be able to access it by going to http://nas-ip:port as well, but that may require you to set the “allow remote access” option in Duplicati Settings.
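For a quick test from an SSH session on the NAS you could do something like this (just a sketch, assuming the container ended up on host port 8200; substitute whatever port Docker actually assigned):

    # Check that the Duplicati web UI answers on the host port that
    # Docker mapped to the container's port 8200
    curl -I http://localhost:8200

    # From another PC on the LAN, point a browser at:
    #   http://<nas-ip>:8200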

No, you don’t need a separate Mono container: docker containers are entirely self-contained and have all of the necessary software to run the application. It wouldn’t help anyway, as containers are isolated from each other.

When a docker container wants to listen on one or more network ports (like TCP 8200), those ports get mapped to ports on the host (the NAS). If a port is already in use on the host, the container will be given an alternate port number.
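For reference, if you were launching the container from the command line instead of the Synology GUI, that mapping is the -p host:container option (a sketch, assuming the duplicati/duplicati image from Docker Hub):

    # Publish the container's port 8200 on host port 8200
    sudo docker run -d --name duplicati -p 8200:8200 duplicati/duplicati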

Are you sure you didn’t actually have Duplicati still running directly on the NAS? If so, it was still using TCP 8200 on the NAS, and your docker version would have been forced onto another port. Make sure you stop the normal Duplicati Synology package (and perhaps uninstall it).
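One way to check what is actually holding the port is from an SSH session on the NAS (a sketch; it assumes netstat is available on your DSM version):

    # See which process (if any) is listening on tcp 8200
    sudo netstat -tlnp | grep 8200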

Here’s a screen shot of mine showing that it had no problem utilizing 8200 on the host:

[screenshot: Docker port settings with host port 8200 mapped to container port 8200]

Since you are new to docker, here are some other important things you should know:

Containers are designed to be disposable

When a new version comes out, you simply delete the existing container, download the new image version, and launch a new container from that fresh image. Export the container settings before you delete it so you can easily spin up the new container.

Data stored within the container will be deleted (this is by design; it’s the docker way of thinking). So it’s important that you map certain folders out to the host system, so the data is stored outside of the container and will survive container deletions/recreations (like when you upgrade).

Here’s how I mapped the /data folder inside the container to docker/duplicati on the NAS. This causes things like the sqlite databases to be stored outside of the container, on the host NAS filesystem:

[screenshot: Synology Docker volume settings mapping docker/duplicati on the NAS to /data in the container]
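On the plain docker command line the same mapping would be a -v option, roughly like this (just a sketch; the duplicati/duplicati image name, container name, and host port are assumptions, since the Synology GUI handles all of this through its Volume tab):

    # Keep Duplicati's configuration and sqlite databases on the NAS,
    # outside the container, so they survive container recreation
    sudo docker run -d --name duplicati \
      -p 8200:8200 \
      -v /volume1/docker/duplicati:/data \
      duplicati/duplicati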

Containers don’t have access to the host by default

Again, this is an intentional and important part of the docker architecture’s design. Containers are supposed to be isolated processes that contain everything they need to run, and not be dependent on the host for anything.

In the case of a container like Duplicati, you’ll want it to have access to the host files so you can actually back them up. You do this in the same volume mapping area I talked about above.

On my NAS I have a single huge SHR2 volume mounted at /volume1. Synology’s Docker implementation doesn’t seem to let me map to that directly, so I have to map each of the top level shared folders that I want to back up:

[screenshot: Synology Docker volume settings mapping each top-level shared folder into the container]

Then within the Duplicati software I tell it to back up /volume1/Video, /volume1/Temp, etc like normal. (It just so happens that how I’ve mapped these folders makes the paths look the same on the NAS and within the container, but that’s not required.)
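For reference, the command-line equivalent of those mappings would just be extra -v options added to the earlier sketch (again only a sketch; the image and container names are assumptions, the folder names are the examples above, and the :ro read-only flag is an optional safeguard I'm adding here, not something Synology requires):

    # Map the shared folders to back up into the container.
    # The :ro suffix mounts them read-only inside the container.
    sudo docker run -d --name duplicati \
      -p 8200:8200 \
      -v /volume1/docker/duplicati:/data \
      -v /volume1/Video:/volume1/Video:ro \
      -v /volume1/Temp:/volume1/Temp:ro \
      duplicati/duplicati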


Thanks for the quick guide!

If you wanted to use a web browser on the NAS itself, you would point to http://localhost:port where port is the tcp port assigned to the docker container and mapped to 8200 within the container.

Yes, but Synology does not seem to offer a browser package. How do you access the UI on your Synology/Docker setup?

And you should be able to access it by going to http://nas-ip:port as well, but that may require you to set the “allow remote access” option in Duplicati Settings.

Nope, but that is likely because I currently run it via the “Bridge” network? If I use the “host” network or set up my own, then I suspect I first have to configure some routing via DSM settings? No idea at this point.

I uninstalled Duplicati when it kept crashing, and have now specifically checked via Resource Monitor that no Duplicati process was left orphaned and running. It looks a bit as if port 8200 was reserved by the one-time Duplicati installation and then left locked.

Thanks for explaining the mapping part. I already understood one side of it, but didn’t know that I also have to make data from within the container available to the outside world.

I will take another look at this later and maybe also try with a new Mono version and without Docker again. Both solutions have their pros and cons.

I don’t access it via the Synology interface - I only access it remotely from my PC. I connect to http://nas-ip:8200 and I get access to Duplicati running in the docker container.

That’s how I’m running it and it works. I wouldn’t use “host” network except in special cases. Double check what port the container has been mapped to on the NAS. You should be able to have it map to the same 8200 port.
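From an SSH session on the NAS you can double check it with the docker command line (a sketch; the container name will be whatever the Synology GUI assigned it):

    # Show running containers and their port mappings (PORTS column)
    sudo docker ps

    # Or query a specific container's mapping for its port 8200
    sudo docker port <container-name> 8200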

Keep working on Docker, once you work out the kinks I think it’s the best solution!

Still no luck with port 8200; it might need a restart (later). But I can finally access the container via another external port number, something I was sure I had tried before, but maybe I just mistyped the number.

Can you ssh to the NAS and do a ps ax | grep -i duplicati to confirm it really isn’t running directly on the NAS?

I remember some past bug in Duplicati where it wouldn’t completely shut down on Synology. Don’t remember which version that was fixed in…

6751 pts/22   S+     0:00 grep --color=auto -i duplicati
16105 ?        Sl     0:20 /volume1/@appstore/Mono/usr/local/bin/mono-sgen /volume1/@appstore/Duplicati/Duplicati.Server.exe

Yep, it’s still running. Do a kill 16105 to stop it. And then you should be able to use port 8200 just fine on your docker container!

v2.0.4.10-2.0.4.10_canary_2018-12-29

Fixed a process shutdown/restart issue on Synology, thanks @drwtsn32x

Fix Duplicati shutdown issue on Synology #3567

The fix should be in the current (somewhat buggy) Canary and Experimental, or in the next Beta when it comes out.

My first installation was the Beta. I now installed the Canary and then uninstalled it, hoping to get rid of the Duplicati process. No dice. I will do a NAS restart now, kill the process, and try again.

Restart did it, got rid of it.

The kill command should have worked… strange
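For reference, if a plain kill ever doesn’t do it, the usual fallback is a forced kill (the PID below is just the one from your ps output and will differ next time):

    # Ask the process to terminate (SIGTERM)
    kill 16105

    # If it's still there after a few seconds, force it (SIGKILL)
    kill -9 16105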

All good, I just did the restart without the kill command, because I planned on doing a restart anyway.

I am currently trying to move a backup that was cancelled halfway through over to the Synology Docker setup. I successfully exported/reimported the backup job and copied the local database over. Unfortunately I started a repair before I had entered the path of the copied database, and because the Canary cannot abort immediately (only after the current file) the repair kept going.

Next I turned off the Duplicati Docker container to stop the repair process. Curiously, I was not able to turn the container back on; Docker claimed that the container file no longer existed on my volume1 drive. I tried a restart of the NAS, but got the same error. So it looks like I have to recreate the container; hopefully this will work by copying the old container’s configuration.

Is there no way in Duplicati to export/import the general configuration settings? I had lots of default advanced settings in there, but had to set them all up manually again.

I could copy the container settings, but not the content. So I have to set up Duplicati within the container all over again. How do I save Duplicati’s settings in a way that they survive a container switch (like to a new D2 container version)?

Did some reading and found the “Duplicati-server.sqlite” file. Now to find out if I can get that into the Docker container.

When I transitioned from a native Duplicati install to docker Duplicati, I copied the *.sqlite files to the path on the NAS that the docker container maps to /data, and all configuration elements were there. I believe I did have to change the path to the job-specific sqlite files, but that was pretty easy. And since I mapped the native NAS paths into the docker container the way I did, the relative pathing to the data to protect looked the same. So it was quite painless for me.
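In shell terms that copy step is roughly the following (a sketch only; it assumes the container is named duplicati, that /volume1/docker/duplicati is the host folder mapped to /data, and the source path is a placeholder for wherever your old install kept its sqlite files):

    # Stop the container first so the databases aren't in use
    sudo docker stop duplicati

    # Copy the server database (and any job databases) into the host
    # folder that is mapped to /data inside the container
    cp /path/to/old/Duplicati-server.sqlite /volume1/docker/duplicati/

    # Start the container again; it should pick up the copied configuration
    sudo docker start duplicati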

You don’t put it “in” the docker container. Remember, everything in the container itself should be considered disposable. The /data path inside the container should map to some folder outside the container. On my system it’s mapped to /volume1/docker/duplicati.

Screenshot: [Docker volume settings mapping /volume1/docker/duplicati to /data]

You don’t want to export content anyway… just export settings. Exporting “content” means you’re probably not following best practices and are storing important data ‘inside’ the container.

Edit - and the reason to export is usually just to make it easier to upgrade to a newer version. Here are the steps I use when upgrading containers on Synology (a rough command-line equivalent is sketched after the list):

  1. Delete the old image in the Image section
  2. Find the container in the Registry and download again
  3. Export the settings of current running container to a file
  4. Stop and Delete the current container
  5. Import the backup file from step 3
    a. This will create a new instance with all original settings
  6. Start the new container
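For reference, the rough command-line equivalent of those steps would be the following (a sketch; it assumes the duplicati/duplicati image, a container named duplicati, and that re-running your original docker run command takes the place of the settings export/import):

    # Pull the newer image
    sudo docker pull duplicati/duplicati

    # Stop and remove the old container (its /data lives safely on the host)
    sudo docker stop duplicati
    sudo docker rm duplicati

    # Recreate the container with the same options you used originally
    sudo docker run -d --name duplicati \
      -p 8200:8200 \
      -v /volume1/docker/duplicati:/data \
      duplicati/duplicati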

Thanks again. Before I read your posts I took a look into the container via terminal to find the /data/Duplicati folder. Then I tried to map that folder to a folder outside the container.

The problem is: once I copy the Duplicati-server.sqlite file from my Windows installation into that folder, the container keeps crashing.