I wrote a bit on this issue here:
Yes, but Duplicati cannot know which files are changed and which are not. It scans ahead to find the full size of the files. The number reported is the size of the files it has not yet examined. Some of the remaining files may turn out to not be modified, but there is no way to guess that in advance. The stalling happens because the upload is active and too much work is in the upload queue.
I am looking into a better way to communicate this.
If you choose “Log” in the menu on the rig…
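In other words, the counter quoted above behaves roughly like the sketch below (hypothetical names; Duplicati itself is written in C#, Python just keeps the illustration short): the reported number is simply the combined size of everything not yet scanned, whether or not it will turn out to be modified.

```python
import os

def remaining_bytes(all_files, examined):
    """The progress number described above: the combined size of files
    that have not yet been examined, modified or not."""
    return sum(os.path.getsize(f) for f in all_files if f not in examined)
```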
And you are not the only one to request it:
I set up a fairly large backup of 500GB, starting for the first time, to a WebDAV server in the cloud. As it’s big, it hadn’t completed the first backup yet before I had to reboot the machine (Windows box). After logging back on, the backup has restarted, but it seems to have started completely from scratch?! There is still nearly 100GB worth of Duplicati backup files on the remote server from the first attempt. Surely it should continue from where it stopped? What’s going to happen to the first set …
What could be done is to use something like the NTFS USN journal or inotify to monitor which files have been touched since the last backup, and then report that size instead of the full folder size.
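As a minimal sketch of that idea, assuming the third-party Python `watchdog` package for the change notifications (Duplicati itself would hook the USN journal or inotify directly, and the path below is hypothetical):

```python
import os
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class TouchedFileTracker(FileSystemEventHandler):
    """Collects paths of files created or modified since monitoring began."""
    def __init__(self):
        super().__init__()
        self.touched = set()

    def on_created(self, event):
        if not event.is_directory:
            self.touched.add(event.src_path)

    def on_modified(self, event):
        if not event.is_directory:
            self.touched.add(event.src_path)

    def pending_size(self):
        """Total bytes of touched files: the size worth reporting instead
        of the size of the whole backup source."""
        return sum(os.path.getsize(p)
                   for p in self.touched if os.path.exists(p))

tracker = TouchedFileTracker()
observer = Observer()
observer.schedule(tracker, "/data/to/backup", recursive=True)  # hypothetical path
observer.start()
# ... the tracker accumulates changed paths between backups ...
# At the next backup, report tracker.pending_size() rather than the
# size of all files not yet scanned.
```

One caveat: change events are lost while the monitor is not running, so a full scan would still be needed as a fallback after a restart (the USN journal persists on disk, which makes it more robust on Windows).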