Duplicati docker loses connection in middle of backup

I am running Duplicati in a docker on an Unraid server and have been having this chronic issue. After successfully doing a complete backup of all my files, I run a new backup, and after backing up six or seven files, Duplicati crashes and the “cannot connect….” message appears.
Eventually the docker just remains in this frozen state. I cannot connect to it, I cannot stop the docker, nothing.

I will wait overnight, check the next morning, and the docker is still frozen.

The only solution I have found is to disconnect the mounted drives it’s backing up to, put the backup server to sleep, then try to stop the docker, and eventually (hours later) it will stop.

Not sure how to fix.

Any help you can provide will be appreciated.

Is there a Dockerfile you can share? I can then give some testing a try.

I’m curious too. I am interested in switching to a docker-based Duplicati but not sure whose container is best. I want one that is updated pretty quickly when new Duplicati versions are released…

All I can say is: install Unraid server and then install the docker.

Do you recall if this was a Docker Hub image, or did you create a Dockerfile yourself?

The one pushed by Duplicati as part of its release should qualify. Anyone know the release scripts well?

https://hub.docker.com/r/duplicati/duplicati is probably where the push lands. Historically, this one has been popular:

https://hub.docker.com/r/linuxserver/duplicati

Request: duplicati official dockerhub images #2309

Release: 2.0.2.20 (Canary) 2018-02-27

  • Added Docker images, thanks @fkrull

Perfect, I didn’t realize there was an “official” docker image. Thanks

It seems like the download page should point to that better. How’s your HTML? This might be the page source.

Hi @eric90066, welcome to the forum!

I’ve been using the official Docker image with Unraid for about a year and don’t recall ever running into the issue you describe.

Of course, I’m using Canary versions of Duplicati on the latest version of Unraid and backing up to cloud and array drives.

When you talk of disconnecting the mounted drives, do you mean USB drives? If so, I wonder if the drive is being put to sleep, causing the Duplicati connection error.

Hi JonMikelV,

No, not USB. I am backing up to another NAS using Unassigned Devices.

Are you using SMB transfers to network drives for your backups? Are you using compression? I am running Unraid 6.7.2.

The issue I am having happens repeatedly. In the middle of the backup, it just stops connecting and the page will not load.

To make matters worse, it took out my Unraid server. By that I mean the docker seizes up and the Docker and VM pages do not load (the dockers themselves are still functional).

On one occasion I had to reboot the server to get things back in order. It’s pretty bad.

With the new canary release I took this opportunity to switch to using the docker container on my Synology NAS. So far so good.

The only minor annoyance is that Duplicati seems to think all files have been “touched”, so it’s reprocessing everything. (But it realizes it doesn’t need to create any new dblocks on the back end…)

Looking at the logs I see things like this:

Sep 5, 2019 8:37 AM: Checking file for changes /xxxxx/yyyyy/zzzz, new: False, timestamp changed: True, size changed: False, metadatachanged: True, 07/27/2019 13:27:00 vs 07/27/2019 13:27:00 

It says the timestamp has changed, but shows the same value on both sides of the “vs”. Any thoughts?

Possibly it is considering milliseconds. We’ll need to debug it to see what is happening in the compare.
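
To illustrate that hypothesis, here is a minimal sketch (illustrative values patterned after the log above, not Duplicati’s actual code): two DateTime values that agree down to the second compare as unequal when their sub-second ticks differ, even though they format identically at second resolution.

using System;
using System.Globalization;

class TickComparisonDemo
{
    static void Main()
    {
        // Same wall-clock second, but the first carries a 0.9416 s fractional part.
        var fromFile = new DateTime(2019, 7, 27, 13, 27, 0, DateTimeKind.Utc).AddTicks(9416000);
        var fromDb = new DateTime(2019, 7, 27, 13, 27, 0, DateTimeKind.Utc);

        // At second resolution the two render identically...
        Console.WriteLine(fromFile.ToString("MM/dd/yyyy HH:mm:ss", CultureInfo.InvariantCulture)); // 07/27/2019 13:27:00
        Console.WriteLine(fromDb.ToString("MM/dd/yyyy HH:mm:ss", CultureInfo.InvariantCulture));   // 07/27/2019 13:27:00

        // ...but DateTime equality compares Ticks (100 ns units), so they differ.
        Console.WriteLine(fromFile != fromDb); // True
    }
}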

Yep, I was thinking that. I’ve made some changes to the code so that a full timestamp will be shown in the log. Trying now…

Did you do a repair? I think I’ve observed this behavior after a repair, but it could also be something else that triggers it.

Nope. The docker container is using the existing databases. I took care to make sure the original paths that were being backed up are available to the container in the same locations. So…Duplicati should speed through this first backup as a docker container without knowing any different.

I got busy earlier but am now testing to see if the timestamps are exactly the same. They should be; they are the same underlying files, after all!

Yep, the fractional second portion is indeed different. (I’m using this format string: yyyyMMdd'T'HHmmssffffK)

Sep 5, 2019 1:31 PM: Checking file for changes /xxxxx/yyyyy/zzzz, new: False, timestamp changed: True, size changed: False, metadatachanged: True, 20190727T1327009416Z vs 20190727T1327000000Z 

20190727T1327009416Z - timestamp of file
20190727T1327000000Z - timestamp in database
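
For reference, here is a quick sketch of how that format string renders the two values (the variable names are just illustrative):

using System;

class FormatDemo
{
    static void Main()
    {
        var fromFile = new DateTime(2019, 7, 27, 13, 27, 0, DateTimeKind.Utc).AddTicks(9416000);
        var fromDb = new DateTime(2019, 7, 27, 13, 27, 0, DateTimeKind.Utc);

        // 'ffff' prints ten-thousandths of a second; 'K' renders the UTC kind as "Z".
        Console.WriteLine(fromFile.ToString("yyyyMMdd'T'HHmmssffffK")); // 20190727T1327009416Z
        Console.WriteLine(fromDb.ToString("yyyyMMdd'T'HHmmssffffK"));   // 20190727T1327000000Z
    }
}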

The question is… how did this happen? Maybe Duplicati running directly on Synology could only get truncated filetime resolution… and now, running in a docker container, it somehow has access to higher-resolution timestamps? Who knows.

I guess I have a couple of options: let it reprocess all the data, or adjust the code to dismiss sub-second timestamp differences…

After thinking about it I’m just going to let it reprocess all the data. I don’t like the idea of artificially reducing filetime resolution in Duplicati itself.

Can you point me to the code where the time comparison is occurring? Thanks.

FilePreFilterProcess.cs:92

// The timestamp counts as changed if the check is disabled outright, the tick
// values differ, or either side is unset (zero ticks). DateTime's != operator
// compares Ticks, so a sub-second difference registers as a change.
var timestampChanged = DISABLEFILETIMECHECK || e.LastWrite != e.OldModified || e.LastWrite.Ticks == 0 || e.OldModified.Ticks == 0;
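
For completeness, if someone did want to dismiss sub-second differences at that comparison, a sketch like the following could truncate both sides to whole seconds before comparing (the class and method names here are made up for illustration, not Duplicati code):

using System;

static class TimestampCompare
{
    // Hypothetical helper: drop the sub-second portion of a DateTime.
    static DateTime TruncateToSeconds(DateTime t) =>
        new DateTime(t.Ticks - (t.Ticks % TimeSpan.TicksPerSecond), t.Kind);

    // Hypothetical looser check: a timestamp only counts as changed if the
    // whole-second values differ or either side is unset.
    public static bool TimestampChanged(DateTime lastWrite, DateTime oldModified) =>
        TruncateToSeconds(lastWrite) != TruncateToSeconds(oldModified)
        || lastWrite.Ticks == 0 || oldModified.Ticks == 0;
}

As noted above, though, artificially reducing filetime resolution in Duplicati itself could risk missing real changes on filesystems that do report sub-second timestamps, so letting the files reprocess once seems the safer choice.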