Duplicati on Synology via Docker?!

I started a new backup from Docker just now. The first thing to notice is that the poor CPU of my DS218+ is completely maxed out while Duplicati works on .MOV files, despite those being listed as already compressed.

Four mono_sgen processes run in parallel, even though Duplicati is set to use only 1 concurrent compression process. Throughput is currently no more than 1-2 MB/s, without anything being uploaded yet.

…While I typed this, the CPU load dropped to idle, but RAM is filled up. The NAS is likely swapping memory pages to the HDD now.

On the plus side: with the Docker installation I can set the network priority of just the NAS (or SSL from the NAS) to low in my router, while keeping SSL on the rest of the network at a higher priority.

I may have to do the initial upload via Windows PC and then move the backup over to NAS/Docker.

Even if it’s not compressing, it still has to do chunking and hashing, which is also CPU intensive.
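For a rough baseline of the hashing cost alone, you can time a plain SHA-256 pass over one of the large files on the NAS itself via SSH. This is only a sketch: the path is a placeholder and it assumes sha256sum is available on DSM. Duplicati hashes every block plus the whole file, so its work is at least this much:

```
# Single SHA-256 pass over one large file (path is an example).
# Duplicati hashes each block *and* the whole file, so its cost is at least this.
time sha256sum /volume1/photo/example.MOV
```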

I don’t think that will work. You can’t move a backup from one platform to another if the path separator is different. Maybe this limitation has been overcome by now…?

Duplicati chokes on a 670 MB MOV file. At first I suspected that case sensitivity of the extensions might be the reason, but both MOV and mov behave the same. I would expect little to no CPU load and RAM usage when a MOV file is just stored without compression.

That’s not the case - hashing is CPU intensive as well.

Did it finish? I’ve been able to use Duplicati to back up hundreds of 1-4GB video files on my NAS. It may have taken a bit of time but it worked for me. And I didn’t even mess with compression settings.

No chance; the DS218+ comes with only 2 GB RAM. Due to heavy memory swapping the NAS becomes nearly unusable.

I switched back to the Windows PC and watched Duplicati’s memory usage while it handled that single MOV file. It takes a long time, with CPU load maxing out at around 6% and memory usage increasing to over 700 MB. No problem for the desktop PC.

Storing the single MOV file via 7-Zip (ZIP/Deflate, store) uses less than 80 MB of memory and takes just a few seconds. So if this really is a hashing bottleneck, then it’s a comparatively big problem, because Duplicati uses so much more memory and time that it rules out running it on the DS218+.
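For reference, a rough 7-Zip command-line equivalent of that store-only test (file names are placeholders; with -tzip, the -mx=0 level is 7-Zip’s “store” mode):

```
# Create a zip archive with no compression ("store") for a timing/memory comparison.
7z a -tzip -mx=0 store-test.zip example.MOV
```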

Gotcha. I am not sure how I did it. My NAS had only 4GB RAM at first and no problems dealing with multi-gigabyte files. I wasn’t using Docker back then but rather the regular Synology package (not sure if that makes much difference). I also tweaked almost no settings in Duplicati. Compression, block size, concurrency, volume size, etc. were and still are at the defaults.

Going by the 600-800 MB numbers I saw on the PC (while that particular MOV is processed), I’d say that 4 GB vs. 2 GB makes all the difference.

Actually I was wrong… I just checked and the original specs were 2GB RAM. I am thinking your choice of using 7-Zip might be part of the issue. It can compress better but it is more memory intensive.

I am not using LZMA for that backup; it’s ZIP/Deflate, plus all big media files in that backup are in the list of non-compressible extensions. I only mentioned 7-Zip because I used it to do a quick ZIP/Deflate “store” run for comparison, to see how quick and especially how (non) memory intensive it would be.

Today I took another look at the problem, trying various settings that supposedly could lower the CPU and memory load: changing the hash method to SHA1, setting the blocksize to 400 KB, explicitly setting the compression method to “None” and the compression level to “0”, and a few more.
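As a sketch, those settings expressed as Duplicati 2 advanced options (the same option names appear in the web UI’s “Advanced options” box and on the CLI). The option names are from memory and should be checked against your version’s built-in help; the two concurrency-* options are not from the post above but are the usual extra knobs for capping CPU use:

```
# Placeholders: <target-url> is the backend URL, /source/photo an example
# source path as mounted inside the container.
# --blocksize raises the default 100KB block size; the SHA1 options replace
# the default SHA-256 hashes; the zip options disable Deflate; the
# concurrency options cap parallel compression/hashing threads.
duplicati-cli backup <target-url> /source/photo \
  --blocksize=400KB \
  --block-hash-algorithm=SHA1 \
  --file-hash-algorithm=SHA1 \
  --zip-compression-level=0 \
  --zip-compression-method=None \
  --concurrency-compressors=1 \
  --concurrency-block-hashers=1
```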

Nothing helped; the backup/NAS already starts to choke shortly after the file scan finishes, while only JPG files are being processed, and then hits a wall once the first MOV file is reached.

But: a “Dry Run” works like a charm, going quickly through the JPGs and reasonably fast through the MOV file(s) - the progress bar for the MOV file is even moving, and at a reasonable pace.

So whatever a real run does beyond a dry run is what keeps my NAS from successfully creating a backup, or even really getting started. Remember that this is all before any files are uploaded, with an empty destination and an empty database.

I find it easier to follow this update procedure (no export/import, but do so if you wish); a rough docker CLI equivalent is sketched after the list.

  1. Download the new ‘latest’ Duplicati image from the Registry.
  2. When the download is complete, stop the Duplicati container and from the Action menu select Clear.
  3. The container will be removed and recreated with the updated image you’ve downloaded.
  4. When the new container has been recreated, start it.
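Roughly, the Registry download plus Clear corresponds to this on the plain docker CLI. DSM keeps the container’s settings and recreates it for you; with the bare CLI you would re-run your original docker run command. The container name here is an assumption:

```
docker pull linuxserver/duplicati:latest   # step 1: grab the updated image
docker stop duplicati                      # step 2: stop the running container
docker rm duplicati                        # "Clear": remove the old container
# ...then recreate it with the same port/volume mappings as before (see the
# volume sketch below); DSM performs this recreation automatically.
```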

FWIW, set your Volume as:
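A sketch of that volume setup as a docker run command: the container-side paths (/config, /backups, /source) are the ones the linuxserver/duplicati image documents, while the host-side paths and container name are assumptions:

```
# /config holds Duplicati's settings and local databases, /backups is an
# optional local backup target, /source is the data to back up (read-only).
# Host paths and the container name are examples only.
docker run -d --name=duplicati \
  -p 8200:8200 \
  -v /volume1/docker/duplicati/config:/config \
  -v /volume1/docker/duplicati/backups:/backups \
  -v /volume1:/source:ro \
  linuxserver/duplicati
```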

This is why export/import is unnecessary. It also gives you an easy way to back up the Docker files via Hyper Backup.


Nice! That is definitely simpler! Just to confirm, selecting Clear doesn’t erase the container config?

Are you using LinuxServer’s container? I’m using the official Duplicati container and it doesn’t use those 3 mount point mappings, just /data which I map to /volume1/docker/duplicati on the host. It is working well for me!
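For comparison, a sketch of the official image’s setup: the /data path and the duplicati/duplicati image name follow its documentation, while the extra source mount and host paths are assumptions (the container can only back up what is mapped into it):

```
# Everything Duplicati keeps (settings, local databases) lives under /data;
# folders to back up still need their own mounts, shown here as an example.
docker run -d --name=duplicati \
  -p 8200:8200 \
  -v /volume1/docker/duplicati:/data \
  -v /volume1/photo:/source/photo:ro \
  duplicati/duplicati
```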

If Clear doesn’t wipe out the container config then yeah I can totally skip the export/import steps. I actually back up some of the config files for my containers using Duplicati itself.

I don’t actively use HyperBackup. Last time I checked it didn’t support B2. Hopefully they will add support soon.

Have you tried using all default Duplicati settings? And is this with the SharePoint back end?

I put the configs on the NAS (see Volume settings as I posted). Nothing is lost in “Clear” (export the first time if you are unsure). I use this procedure w/all containers.

I am using the image from linuxserver/duplicati.

Pretty much the default settings except the Volume mappings (shown above), and I use port 8200 > 8200 (not Auto > 8200).

Turn it ON and test it locally in the browser address bar… NAS-IP:8200… for example:

192.168.1.10:8200
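The same check from an SSH session on the NAS, if a shell is handier than the browser (same example IP):

```
# A response with an HTTP status line (200 or a redirect to the login page)
# means the Duplicati web UI is reachable on the mapped port.
curl -I http://192.168.1.10:8200
```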

That is what I started with. Yes, SharePoint destination, but the backup does not reach the point of uploading anything.

Overall I find that Duplicati uses lots of memory on the Windows desktop, too. 600-800 MB seems somewhat excessive, even if it’s not a problem on the desktop (16 GB RAM).

Yes, so do I… For the official Duplicati docker container, all important data is stored in /data, which I map to the host NAS. I do an export/import just to keep all configuration elements (folder mappings, environment vars, etc.) - but if that’s not needed then great! I will test next time there is an update to the container.

Maybe it doesn’t upload anything, but I wonder if it starts to talk to the back end and hangs at that point… Just to rule out SharePoint, can you try a different type of destination?
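For instance, a quick throwaway run against a local folder on the NAS would separate a backend problem from a processing problem. A minimal sketch, assuming the duplicati-cli wrapper is available inside the container and using placeholder paths:

```
# Local-folder destination instead of SharePoint; paths are examples only.
duplicati-cli backup file:///volume1/duplicati-test /source/photo \
  --dbpath=/data/test-local.sqlite
```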

I completely missed the official container… probably because it doesn’t have the highest “star count”. So I switched, mapping the container’s /data to the NAS. A quick test to B2 worked. Then I downloaded Canary “30”, stopped and cleared the container. When the new container appeared, I restarted it, and all was good. Updated version confirmed; previous test backup profile intact along with logs. Restore successful! Thanks for the “official” pointer.
