So today was the day the backup completed, I think. It ended with an I/O error, and I'm not sure why, but at least I know it uploaded a couple of TB of data first.
I updated to the canary build mentioned above by @kenkendk, and added `--use-block-cache` (set to true) to the config. I also restarted both VMs involved (OmniOS serving the data, and an Ubuntu Server doing the backing up) for good measure.
Anyhow, I started the backup again, and it has been sitting on “Verifying backend data …” for a few hours, currently consuming 100 % of one of the virtual CPU cores in the VM.
Also:
- Logging is set to “Warning”, but I have “profiling” on in my browser (the live log), and it looks like the log file is picking up the profiling output too, because it grows fast whenever something is happening. I have a 12 GB log file from the previous backup.
- The “QZDJRHUBWM.sqlite” file is 5 GB.
- If I counted correctly, there are 1,982,566 files in the directory being backed up (minus a few that are excluded in the backup).
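For anyone who wants to reproduce the count: this is roughly how to get it, assuming the source directory is the `/datapool/Share` export from my mount flags below. It does not apply Duplicati's exclude filters, so the real number backed up is slightly lower.

```shell
# Count regular files (not directories) recursively under the backup source.
# /datapool/Share is the NFS export from my setup; adjust the path as needed.
find /datapool/Share -type f | wc -l
```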
Is there a way I can debug why it is taking so long? Can I provide some specific data, or run a test?
(I have been running this on an alternative system with a similar setup as well, but against a different backend, JottaCloud, and with a lot less data: a few hundred GB rather than multiple TB. There I have had no problems, and it's a lot faster.)
EDIT:
It’s done verifying backend, and now I see a lot of:
Oct 2, 2017 11:32 AM: Checking file for changes /path/to/file.psd, new: True, timestamp changed: True, size changed: False, metadatachanged: False, 1/8/2013 12:30:04 PM vs 1/1/0001 12:00:00 AM
It says the timestamp has changed even though it has not: the file's real timestamp is being compared against “1/1/0001 12:00:00 AM” (which looks like .NET's `DateTime.MinValue`, i.e. no stored timestamp at all), and that makes Duplicati think everything is new.
Both the Ubuntu VM (via NFS) and the OmniOS VM directly say the file's date is “Jan 8 2013”. The date/time on both systems is correct.
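For reference, this is how I compared the modification times from the shell on both sides (a sketch; the `.psd` path is the placeholder from the log line above, and the `stat -c` syntax is the GNU/Ubuntu side):

```shell
# GNU stat on the Ubuntu VM: mtime as epoch seconds plus human-readable form.
stat -c '%Y  %y' /path/to/file.psd
# OmniOS (illumos) ships a different stat; there, `ls -E /path/to/file.psd`
# shows the full-resolution timestamp instead.
```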
Does this make sense to any of you?
(Status as of now, reported by Duplicati, is: 273034 files (2.73 TB) to go at 1.06 KB/s
– that is about 1/4th of the progressbar)
EDIT2:
I’ve tweaked my NFS mount, so I now get ~450 MB/s write and around 1 GB/s read. I tested the write/read with “dd”, following this blog post.
Write:
```shell
$ sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.40283 s, 447 MB/s
```
Read:
```shell
$ dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.03813 s, 1.0 GB/s
```
NFS mount options:
```
/datapool/Share from 192.168.20.15:/datapool/Share
Flags: ro,noatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,acregmin=1800,acregmax=1800,acdirmin=1800,acdirmax=1800,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.20.15,mountvers=3,mountport=47674,mountproto=udp,local_lock=all,addr=192.168.20.15
```
EDIT3:
After a few issues with the Duplicati service crashing, NFS locking up, etc., I finally completed a successful backup.
- It took quite a while, and it spent most of the time setting timestamps. Duplicati did this in bursts of 10-30 files at a time, then paused a bit (maybe a CPU limit?), and continued.
- I’m not sure what caused my NFS issues, but I solved them by tweaking rsize/wsize and setting the MTU of the internal network to 9000.
- The Duplicati crashes are random; I'm not sure what is causing those. But when it works, it works.
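For reference, the NFS/MTU tweaks from the second bullet look roughly like this. This is a config sketch, not something to paste blindly: the interface name `eth1` is an assumption, the server IP and export path come from my mount flags above, and NFS rsize/wsize are negotiated at mount time, so the share has to be unmounted and remounted rather than remounted in place.

```shell
# Jumbo frames on the internal (storage) interface; eth1 is an assumption.
sudo ip link set dev eth1 mtu 9000

# Remount the share with explicit rsize/wsize (values from my mount flags above).
sudo umount /datapool/Share
sudo mount -t nfs -o ro,noatime,vers=3,rsize=1048576,wsize=1048576 \
    192.168.20.15:/datapool/Share /datapool/Share
```

The MTU change has to be made on both VMs (and any virtual switch in between), or large frames will be dropped, which can look exactly like NFS "locking up".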