I’ve reorganized the file structure of what is being backed up, so during the backup Duplicati is checking whether these files already exist in the backup, presumably verifying that blocks with the same hashes are already present. The read speed during this check is a steady 4 MB/s with occasional stalls, while reading the same file myself exceeds 200 MB/s.
Current action: Backup_ProcessingFiles
Live log with the verbosity set to show messages up to the Profiling level:
Jan 18, 2020 12:44 PM: Checking file for changes $path, new: True, timestamp changed: True, size changed: True, metadatachanged: True, 10/31/2018 8:06:59 AM vs 1/1/0001 12:00:00 AM
Jan 18, 2020 12:44 PM: New file $path
Jan 18, 2020 12:43 PM: Including path as no filters matched: $path
I’m running Duplicati - 2.0.4.23_beta_2019-07-14 in the linuxserver.io docker container on an unRAID server.
While the live log does report multiple “Checking file for changes” entries, I’ve verified with iotop and “lsof -p -r 5” that this is the only file being read, and while 5 file handles for my RAM-backed tmp file are open by the Duplicati process, none are being written to. The sqlite files are on an SSD.
Nothing else on the server is being read from or written to, and I’ve killed all other non-system processes as well.
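For reference, this is roughly how I watched the I/O; the PID below is just a placeholder for the mono/Duplicati process ID:

```sh
# Show only processes that are actually doing disk I/O
iotop -o

# Find the Duplicati process ID, then re-list its open files every 5 seconds
pgrep -f Duplicati
lsof -p <duplicati-pid> -r 5
```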
If I just do a “dd if=$path of=/dev/null bs=4M status=progress conv=fsync” of that same file, it reads at over 200 MB/s.
I verified the dd speeds both outside of Docker and within the container via “docker exec -it duplicati bash”.
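For completeness, the read-speed check on the host and from inside the container (container name “duplicati” as in the exec command above; $path may be mounted at a different location inside the container):

```sh
# On the unRAID host
dd if=$path of=/dev/null bs=4M status=progress conv=fsync

# Same check run inside the container
docker exec duplicati dd if=$path of=/dev/null bs=4M status=progress conv=fsync
```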
The file itself is a compressed video file that has not changed; I’ll check whether the same slowdown occurs with compressible data. top reports 85% idle, 10% iowait, and mono-gen (Duplicati) using 15% CPU. There is plenty of free RAM, no swapping to disk, etc.
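My plan for the compressible-data test is simply to drop a zero-filled file into the backup source and watch the read speed on the next run; the path here is hypothetical:

```sh
# Create a 2 GB, highly compressible file inside the backup source
dd if=/dev/zero of=/path/to/backup-source/testfile.bin bs=4M count=512
```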
Thankfully this process doesn’t happen frequently, but I was planning to run regular full remote verifications of the backup files, and presumably the same process would occur there. It’s a rather large backup of family photos and videos.
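For those verification runs I had something like the CLI test command with full remote verification in mind; I haven’t confirmed the exact invocation inside the linuxserver.io container, so the binary path and storage URL below are placeholders:

```sh
# Download and verify every remote volume, not just a random sample
mono /app/duplicati/Duplicati.CommandLine.exe test "<storage-url>" all --full-remote-verification=true
```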