Hi all. Something very odd happened. A backup job whose source is about 10 GB in total suddenly ‘jumped up’ to 128 TB during counting. It literally went from 7-point-something GB one second to 128 TB the next. See screenshot.
That exceeds the physical storage of the entire host system by far. I also don't think it could be links or mounts to other servers, as no machine in my network has 128 TB. Is this normal? A bug in the beta? 2.1.0.3_beta_2025-01-22
(I am running Duplicati with the :latest tag from the Linuxserver repo. Btw, I installed it today and did not expect the :latest tag to be a beta. But that's another issue. If this backup job ever finishes, I will pin the image to a stable release number.)
It looks like Duplicati has found “something” that is reported as really large. Things like block devices etc. are excluded, so it is not that. If it were an infinite hard/soft-link loop, the number should keep increasing, but since it stays stable, that cannot explain it.
It could be some kind of sparse file, where the file reports a very large size but the data is not actually expanded on disk. If that is the case, you should at some point see the progress stick on a single file for a long time. Such a file will be mostly zeroes and will not inflate the backup storage significantly, but it will give you trouble when trying to restore it.
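If you want to check for that yourself, here is a minimal sketch (assuming a Linux host with Python available; the helper name find_sparse is just something made up for this example, not part of Duplicati) that flags files whose apparent size is far bigger than what is actually allocated on disk:

```python
#!/usr/bin/env python3
"""Flag files whose apparent size greatly exceeds their allocated disk space,
which is the telltale sign of a sparse file. Illustrative sketch only."""
import os
import stat
import sys


def find_sparse(root, min_size=1 << 30, ratio=10):
    """Yield (path, apparent_bytes, allocated_bytes) for suspiciously sparse files."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # unreadable entry, skip it
            if not stat.S_ISREG(st.st_mode):
                continue  # skip symlinks, devices, etc.
            allocated = st.st_blocks * 512  # st_blocks is in 512-byte units on Linux
            if st.st_size >= min_size and allocated * ratio < st.st_size:
                yield path, st.st_size, allocated


if __name__ == "__main__":
    top = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, apparent, allocated in find_sparse(top):
        print(f"{path}: apparent {apparent:,} bytes, allocated {allocated:,} bytes")
```

Run it against the suspect source folder; a genuinely sparse file will show a huge apparent size alongside a comparatively tiny allocated size.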
It is certainly not normal. I have not heard anyone else reporting it.
If you can pin down what data is reported to have this size, you can exclude it, and we can possibly add some logic to Duplicati that excludes it automatically.
Because Duplicati has been in the beta phase for so long, the :latest tag has been pointing to the beta release. Last week we got the first stable release out, so from now on :latest will point to a stable release. The stable release is the same as this one, except for some changed timeout values for WebDAV, so there is no need to test whether the stable release changes this.
Thanks for the detailed response, much appreciated.
Unfortunately I cannot identify a file or link that could explain this behaviour. I ‘solved’ it for now by excluding the entire folder. Not 100% ideal, but also not a nightmare, since it contained only Docker images that I could pull from my repo at any time. Backing them up was overkill anyway. All other backup jobs run as expected.
Very nice work btw. On my other machines I run Duplicacy, and now I am considering migrating them over to Duplicati.
Have you already tried isolating it to a specific file in the folder? A fast way to do that may be:
Configure a backup of the folder in a different job, and additionally use job Advanced options:
- dry-run (checkmark). This should keep it from getting stuck actually trying to handle a big file.
- log-file=<path>
- log-file-log-level=Verbose
After the run, check the log to see whether any file in the dry run is forecast at a size way past what you expect.
Example output (with no surprise):
2025-02-03 09:01:04 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FileBlockProcessor.FileEntry-WouldAddNewFile]: Would add new file C:\backup source\short.txt, size 118 bytes
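If the log turns out to be large, a small sketch like the following (assuming Python is available, and that the log lines follow the exact format shown above; adjust the regex if yours differ) can pull out the biggest “Would add new file” entries:

```python
#!/usr/bin/env python3
"""Report the largest 'Would add new file' entries in a Duplicati verbose dry-run log.
The regex is based on the example line in this thread; tweak it if your log differs."""
import re
import sys

LINE_RE = re.compile(
    r"WouldAddNewFile\]: Would add new file (?P<path>.+), size (?P<size>\d+) bytes"
)


def largest_entries(log_path, top=20):
    """Return the `top` biggest files the dry run would have added, largest first."""
    entries = []
    with open(log_path, errors="replace") as fh:
        for line in fh:
            match = LINE_RE.search(line)
            if match:
                entries.append((int(match.group("size")), match.group("path")))
    return sorted(entries, reverse=True)[:top]


if __name__ == "__main__":
    for size, path in largest_entries(sys.argv[1]):
        print(f"{size:>20,}  {path}")
```

Save it under any name you like and pass the log file path as the argument; anything reported at a wildly inflated size should stand out immediately.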
EDIT:
Note: A test with Process Monitor suggests that it actually reads the file, so maybe get ready in Task Manager to kill the process if it gets stuck trying to read through a 128 TB file, for example.