I’m running Duplicati 126.96.36.199 on Arch Linux, launched as a systemd user service.
I recently added some folders to my remote (Amazon S3) backup set and now Duplicati says:
S3: 145148 files (430.11 GB) to go at 825.98 KB/s
while it also says:
which is also the more realistic number.
Why is the first number so terribly off? Or does it mean something entirely different?
Is the backup job perhaps still running? I’ve noticed the “Source” size won’t update (after more data is added) until the backup job is completed.
BTW, what is the total size of the folders you recently added to the backup set?
The backup is still running.
But I found the problem. I had used du to get the disk usage of the folders to be backed up. The partition is btrfs, and probably due to the CoW nature of the filesystem, du reported just 4.2 GiB (actual disk usage) for what is 400 GiB (reserved space).
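The gap between “disk usage” and “apparent size” can be checked directly: on Linux, a file’s allocated blocks (what plain `du` counts) are reported separately from its nominal length (what `du --apparent-size` counts). A minimal sketch using a sparse file to show the same mismatch (the 400 MiB figure is arbitrary, chosen only for illustration):

```python
import os
import tempfile

# Create a sparse file: 400 MiB of apparent size, but no blocks written.
# Sparse files and btrfs CoW/reservations both make allocated size diverge
# from apparent size, which is why du can report 4.2 GiB for "400 GiB" of data.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(400 * 1024 * 1024)
    path = f.name

st = os.stat(path)
apparent = st.st_size            # what `du --apparent-size` counts
allocated = st.st_blocks * 512   # what plain `du` counts (st_blocks is in 512-byte units)
print(f"apparent:  {apparent} bytes")
print(f"allocated: {allocated} bytes")
os.remove(path)
```

Duplicati reads the full apparent contents of each file, so its progress counter reflects the larger number.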
Does Duplicati do deduplication? At the file level or the block level?
Duplicati performs deduplication on the block level (the Duplicati blocks, not the disk blocks).
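The idea behind block-level deduplication can be sketched in a few lines: files are split into fixed-size blocks, each block is hashed, and identical blocks are stored only once while each file keeps an ordered list of block hashes. This is a toy illustration of the general technique, not Duplicati’s actual storage format (its real block size is configurable and much larger; the 4-byte blocks here are just for demonstration):

```python
import hashlib

def split_blocks(data: bytes, block_size: int = 4) -> list[bytes]:
    # Fixed-size blocks; tiny block size purely for demonstration.
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def dedup_store(files: dict[str, bytes]):
    """Toy block-level dedup: each unique block is stored once, keyed by
    its hash; a manifest maps each file to its ordered block hashes."""
    blocks = {}    # hash -> block bytes (stored exactly once)
    manifest = {}  # filename -> list of block hashes
    for name, data in files.items():
        hashes = []
        for block in split_blocks(data):
            h = hashlib.sha256(block).hexdigest()
            blocks.setdefault(h, block)  # skip blocks already stored
            hashes.append(h)
        manifest[name] = hashes
    return blocks, manifest

files = {"a.txt": b"ABCDABCDXYZ!", "b.txt": b"ABCDXYZ!"}
blocks, manifest = dedup_store(files)
print(len(blocks))  # -> 2: only "ABCD" and "XYZ!" are stored
```

Restoring a file is then just concatenating its blocks in manifest order, e.g. `b"".join(blocks[h] for h in manifest["a.txt"])`.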