Duplicati Docker container loses connection in the middle of a backup

The image pushed by Duplicati as part of its release process should qualify. Does anyone know the release scripts well?

https://hub.docker.com/r/duplicati/duplicati is probably where the push lands. The historically popular alternative is this:

https://hub.docker.com/r/linuxserver/duplicati

Request: duplicati official dockerhub images #2309

Release: 2.0.2.20 (Canary) 2018-02-27

  • Added Docker images, thanks @fkrull

Perfect, I didn’t realize there was an “official” Docker image. Thanks!

It seems like the download page should point to that more clearly. How’s your HTML? This might be the page source.

Hi @eric90066, welcome to the forum!

I’ve been using the official Docker image with Unraid for about a year and don’t recall ever running into the issue you describe.

Of course, I’m using Canary versions of Duplicati on the latest version of Unraid and backing up to cloud and array drives.

When you talk of disconnecting the mounted drives, do you mean USB drives? If so, I wonder if the drive is being put to sleep, causing the Duplicati connection error.

Hi JonMikelV,

No, not USB; I am backing up to another NAS using Unassigned Devices.

Are you using SMB transfers to network drives for your backups? Are you using compression? I am running Unraid 6.7.2.

The issue I am having happens repeatedly: in the middle of the backup, Duplicati just stops responding and the page will not load.

To make matters worse, it took out my Unraid server. By that I mean the Docker service seizes up, and the Docker and VM pages do not load (the containers themselves are still functional).

On one occasion I had to reboot the server to get things back in order. It’s pretty bad.

With the new Canary release I took the opportunity to switch to running the Docker container on my Synology NAS. So far, so good.

The only minor annoyance is that Duplicati seems to think all files have been “touched,” so it’s reprocessing everything. (But it realizes it doesn’t need to create any new dblocks on the back end…)

Looking at the logs I see things like this:

Sep 5, 2019 8:37 AM: Checking file for changes /xxxxx/yyyyy/zzzz, new: False, timestamp changed: True, size changed: False, metadatachanged: True, 07/27/2019 13:27:00 vs 07/27/2019 13:27:00 

It says the timestamp has changed, but then shows the same value on both sides of the “vs”. Any thoughts?

Possibly it is considering milliseconds. We’ll need to debug it to see what is happening in the comparison.
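To illustrate the idea, here’s a minimal standalone C# sketch (hypothetical names, not Duplicati’s actual code): two DateTime values can differ in their Ticks yet print identically at second resolution, matching the log line above.

    using System;

    class SubSecondDemo
    {
        static void Main()
        {
            // An arbitrary sub-second offset: 1 tick = 100 ns, so 9,416,000 ticks = 0.9416 s.
            var fromFile = new DateTime(2019, 7, 27, 13, 27, 0).AddTicks(9416000);
            var fromDb = new DateTime(2019, 7, 27, 13, 27, 0);

            // DateTime comparison is exact down to the tick, so these are unequal...
            Console.WriteLine(fromFile != fromDb); // True

            // ...but a second-resolution format hides the difference:
            // prints "07/27/2019 13:27:00 vs 07/27/2019 13:27:00"
            Console.WriteLine("{0:MM/dd/yyyy HH:mm:ss} vs {1:MM/dd/yyyy HH:mm:ss}", fromFile, fromDb);
        }
    }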

Yep, I was thinking that. I’ve made some changes to the code so that a full timestamp will be shown in the log. Trying now…

Did you do a repair? I think I have observed this behavior after a repair, but it could also be something else that triggers this.

Nope. The Docker container is using the existing databases, and I took care to make sure the original paths being backed up are available to the container in the same locations. So Duplicati should speed through this first backup as a Docker container without noticing any difference.

I got busy earlier but am now testing to see if the timestamps are exactly the same. They should be; they are the same underlying files, after all!

Yep, the fractional second portion is indeed different. (I’m using this format string: yyyyMMdd'T'HHmmssffffK)

Sep 5, 2019 1:31 PM: Checking file for changes /xxxxx/yyyyy/zzzz, new: False, timestamp changed: True, size changed: False, metadatachanged: True, 20190727T1327009416Z vs 20190727T1327000000Z 

20190727T1327009416Z - timestamp of file
20190727T1327000000Z - timestamp in database
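To double-check that reading, a small standalone sketch (hypothetical names) that parses both strings back with the same format string and prints the difference:

    using System;
    using System.Globalization;

    class ParseBackDemo
    {
        const string Fmt = "yyyyMMdd'T'HHmmssffffK";

        static void Main()
        {
            var fromFile = DateTime.ParseExact("20190727T1327009416Z", Fmt,
                CultureInfo.InvariantCulture, DateTimeStyles.AdjustToUniversal);
            var fromDb = DateTime.ParseExact("20190727T1327000000Z", Fmt,
                CultureInfo.InvariantCulture, DateTimeStyles.AdjustToUniversal);

            // The difference is exactly the fractional-second part: 9,416,000 ticks = 0.9416 s.
            Console.WriteLine(fromFile - fromDb); // 00:00:00.9416000
        }
    }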

The question is: how did this happen? Maybe Duplicati running directly on Synology could only get truncated file-time resolution, and now, running in a Docker container, it somehow has access to higher-resolution timestamps? Who knows.

I guess I have a couple of options: let it reprocess all the data, or adjust the code to ignore sub-second timestamp differences (sketched below)…
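For reference, a rough sketch of what that second option might look like (a hypothetical helper, not a proposed patch):

    using System;

    static class TimestampCompare
    {
        // Drop the sub-second part of a DateTime, keeping its Kind.
        static DateTime TruncateToSeconds(DateTime t) =>
            new DateTime(t.Ticks - (t.Ticks % TimeSpan.TicksPerSecond), t.Kind);

        // Report a change only when the timestamps differ at whole-second resolution.
        public static bool TimestampChanged(DateTime lastWrite, DateTime oldModified) =>
            TruncateToSeconds(lastWrite) != TruncateToSeconds(oldModified);
    }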

After thinking about it, I’m just going to let it reprocess all the data. I don’t like the idea of artificially reducing file-time resolution in Duplicati itself.

Can you point me to the code where the time comparison is occurring? Thanks.

FilePreFilterProcess.cs:92

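// With DISABLEFILETIMECHECK set, every file is treated as changed. Otherwise
// != compares DateTime.Ticks exactly, so any sub-second difference counts,
// and a zero Ticks value on either side (no stored timestamp) also counts.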
var timestampChanged = DISABLEFILETIMECHECK || e.LastWrite != e.OldModified || e.LastWrite.Ticks == 0 || e.OldModified.Ticks == 0;

Thanks. And are there still any issues unique to running Duplicati on Synology?

Not really; I just like the idea of using Docker!

I only had one issue with the native Synology package, but I was able to work around it: Duplicati does not auto-start.

I think this may be a dependency/timing issue on NAS bootup. The Mono package may not be started first, so Duplicati fails to start. (This is just speculation.)

Someone else in the forum showed me how they used the Synology Task Scheduler to launch Duplicati. You can create a task that is triggered at the “boot-up” event. Using a bash script, we can introduce a one- or two-minute delay to ensure Mono is started first, then launch Duplicati.

One problem with this approach: there isn’t a clean way to shut down Duplicati. I have to SSH to the NAS and send a SIGTERM to the process using the kill command.

An advantage of this Task Scheduler approach is that I can pass custom command-line options to the Duplicati process, something that seems to be impossible with the native Synology package.

On Windows I get matching values, with the milliseconds present.

I have a Synology I can test on.

It’s a theory, but please confirm whether mono --version is still as described below (from July 26, 2019). The Docker image’s mono is newer.

Mono 5.12.0 Release Notes

Added support for nanosecond resolution in file information on platforms where the information is available. This means the return value of APIs like FileInfo.GetLastWriteTime () is now more precise.

Synology seems to use btrfs or ext4; both store timestamps with nanosecond resolution. Use the stat command to see it.
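If it helps, here is a small standalone sketch (it assumes you pass a file path as the argument) to see what sub-second resolution the running mono actually reports:

    using System;
    using System.IO;

    class ResolutionProbe
    {
        static void Main(string[] args)
        {
            // args[0]: any file whose stat output shows nanosecond timestamps.
            var t = new FileInfo(args[0]).LastWriteTimeUtc;
            long subSecondTicks = t.Ticks % TimeSpan.TicksPerSecond;

            // On a mono without sub-second support this should print 0;
            // Mono 5.12.0+ should report the real fractional part.
            Console.WriteLine($"{t:yyyyMMdd'T'HHmmssffffK} sub-second ticks: {subSecondTicks}");
        }
    }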

Great, I would love to hear your results. I’m poking around the SQLite database created by the Synology Duplicati package, and it looks like this:

[screenshot: the SQLite database created by the Synology Duplicati package]