Duplicati Docker container automatically restarts while verifying files


I have Duplicati installed as a Docker container on my Synology NAS. I have 4 backup plans and they were running fine, but for a few weeks now one of them hasn't worked anymore. The issue seems to be in the file verification phase; when I run a "verify files" action separately I get the same problem. It starts out fine, but after a while the Docker container with Duplicati suddenly restarts, aborting my backup (or the "verify files" action).

I must say it's a rather big backup plan and I was still growing it (source 2.20 TB; backup now 2.05 TB), although I do have a slightly bigger one that works fine. Admittedly, this plan backs up to OneDrive for Business, which makes it a bit more prone to the occasional failure, but until a few weeks ago all this just worked fine, although the size was slightly smaller then. Remote volume size: 50 MB; dblock-size: 3 MB.

I don't see anything logged, though; I guess logging only starts in a later phase?

Feedback would be warmly appreciated! Thank you!

PS: please don't advise me to just retry the backup plan. Because of the nature of this kind of backup to OneDrive for Business, I had to build up the size in small pieces (otherwise I get a failure), making it an extremely slow process to reach the total size (or, well, for now the "almost total size"). I could lose many weeks or even months, as such backups to OfB are very slow!


Are you limiting the RAM resources on this container? How much RAM does your NAS have?

Hi @drwtsn32 ,
My NAS has 3 GB of RAM and a 2 GB swap file. There is currently no memory limit on the Duplicati Docker container.
I have pulled one big file out of my data source (probably bringing the total source size to a bit less than 2 TB, although I should double-check this) and now the backup plan works again. So I guess it must have something to do with 2 TB and/or with memory.

Sometimes my NAS becomes unresponsive as well when running this backup plan, but that could be the result of having no memory limit on the container. A new attempt would typically result in a successful backup run, however. I have not enabled such a limit so far, because I was afraid the container would then run short of memory every time, making the backup plan fail on every run. But now, with a data source totalling more than 2 TB, it seems the container itself fails even without such a limit in effect (i.e., the container restarts during file verification, although the NAS itself keeps running fine).
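For reference, if I did want to try a limit, it could be set when (re)creating the container. This is only a sketch; the container name, image, ports, and volume paths here are assumptions based on a typical linuxserver/duplicati setup, not my actual configuration:

```shell
# Hypothetical example: recreate the Duplicati container with a memory cap.
# --memory is a hard limit (the container gets OOM-killed above it);
# --memory-swap is the combined memory + swap allowance.
docker run -d \
  --name duplicati \
  --memory 1g \
  --memory-swap 2g \
  -p 8200:8200 \
  -v /volume1/docker/duplicati/config:/config \
  -v /volume1:/source:ro \
  linuxserver/duplicati
```

Whether 1 GB would be enough for a 2 TB backup plan is exactly what I'm unsure about, which is why I left the limit off.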

During file verification I can see the container use more than 0.5 GB of memory, but it could be that this increases after a while (I haven't found the time to keep an eye on this usage parameter, as file verification can take many hours in my case).
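To track this without watching the screen for hours, something like the following could log the container's memory usage once a minute during a verify run (a rough sketch; "duplicati" is an assumed container name and the log path is arbitrary):

```shell
# Append a timestamped memory-usage sample for the container every 60 seconds.
while true; do
  echo "$(date '+%F %T') $(docker stats duplicati --no-stream \
    --format '{{.MemUsage}} ({{.MemPerc}})')" >> /tmp/duplicati-mem.log
  sleep 60
done
```

The resulting log should show whether usage climbs steadily during verification or spikes right before the restart.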