I am running Duplicati in a Docker container on my Unraid server and have been having this chronic issue. After successfully doing a complete backup of all my files, I run a new backup, and after backing up six or seven files, Duplicati crashes and the “cannot connect…” message appears.
Eventually the docker just remains in this frozen state. I cannot connect to it, I cannot stop the docker, nothing.
I will wait overnight check the next morning and the docker is still frozen.
The only solution I have found is to disconnect the mounted drives it’s backing up to, put the backup server to sleep, then try to stop the docker, and eventually (hours later) it will stop.
I’m curious too. I am interested in switching to a docker-based Duplicati but not sure whose container is best. I want one that is updated pretty quickly when new Duplicati versions are released…
I’ve been using the official Docker image with Unraid for about a year and don’t recall ever running into the issue you describe.
Of course, I’m using Canary versions of Duplicati on the latest version of Unraid and backing up to cloud and array drives.
When you talk of disconnecting the mounted drives, do you mean USB drives? If so, I wonder if the drive is being put to sleep, causing the Duplicati connection error.
No, not USB; I am backing up to another NAS using Unassigned Devices.
Are you using SMB transfer to network drives for your backups? Are you using compression? I am running Unraid 6.7.2.
The issue I am having happens repeatedly. In the middle of the backup, it just stops connecting and the page will not load.
To make matters worse, it took out my Unraid server. By that I mean the docker seizes up, and the Docker and VM pages do not load (the containers themselves are still functional).
On one occasion I had to reboot the server to get things back to order. It’s pretty bad.
With the new canary release I took this opportunity to switch to using the docker container on my Synology NAS. So far so good.
Only minor annoyance is that Duplicati seems to think all files have been “touched” so it’s reprocessing everything. (But it realizes it doesn’t need to create any new dblocks on the back end…)
Nope. The docker container is using the existing databases. I took care to make sure the original paths that were being backed up are available to the container in the same locations. So…Duplicati should speed through this first backup as a docker container without knowing any different.
I got busy earlier but am now testing to see whether the timestamps are exactly the same. They should be; they are the same underlying files, after all!
20190727T1327009416Z — timestamp of the file on disk
20190727T1327000000Z — timestamp stored in the database
Question is… how did this happen? Maybe Duplicati running directly on the Synology could only get truncated filetime resolution, and now, running in a Docker container, it somehow has access to higher-resolution timestamps? Who knows.
I guess I have a couple of options: let it reprocess all the data, or adjust the code to dismiss sub-second timestamp differences…
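The second option could be sketched as a comparison that ignores sub-second precision. This is a minimal illustration in Python, not Duplicati’s actual code; the function name and the parsing of the sub-second digits in the timestamps above are assumptions:

```python
from datetime import datetime, timezone

def timestamps_match(file_ts: datetime, db_ts: datetime) -> bool:
    """Compare two timestamps, ignoring any sub-second difference."""
    return file_ts.replace(microsecond=0) == db_ts.replace(microsecond=0)

# The two values from the post, assuming the trailing digits are
# fractional seconds (13:27:00.9416 vs. 13:27:00.0000):
file_ts = datetime(2019, 7, 27, 13, 27, 0, 941600, tzinfo=timezone.utc)
db_ts = datetime(2019, 7, 27, 13, 27, 0, 0, tzinfo=timezone.utc)

print(timestamps_match(file_ts, db_ts))  # True: same file to whole-second precision
```

With a comparison like this, a database populated by a filesystem that only stored whole seconds would still match the same files read at higher resolution.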
After thinking about it I’m just going to let it reprocess all the data. I don’t like the idea of artificially reducing filetime resolution in Duplicati itself.