Metadata inaccurate when restarting failed backup

When rerunning a failed backup, I end up seeing the following log message for files that were already backed up:

Apr 1, 2020 12:30 PM: Checking file for changes [filename here], new: True, timestamp changed: True, size changed: True, metadatachanged: True, 03/27/2020 04:11:59 vs 01/01/0001 00:00:00

There is one of these log messages for every file.

Then the backup starts inspecting the blocks of each already-backed-up file, all of which are already present on the backup target, and skips them one at a time. I imagine this process would go a lot faster if the metadata for each file in the database were accurate, letting Duplicati skip the file outright.

Why is this metadata inaccurate, and is there anything I can do to fix it so that it is accurate in the future? I’m backing up terabytes of data and only a small amount is expected to change each night, so Duplicati is spending a lot of time processing files that it does not seem to need to process.
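For anyone following along, here is a minimal sketch of the kind of check the log line suggests. The `01/01/0001 00:00:00` in the message is .NET's `DateTime.MinValue`, i.e. no stored timestamp, so the comparison can never match and the file is rescanned block by block. The function name `needs_scan` and the exact comparison are my own illustration, not Duplicati's actual code:

```python
from datetime import datetime

# .NET DateTime.MinValue, which the log prints as 01/01/0001 00:00:00
DOTNET_MIN = datetime(1, 1, 1)

def needs_scan(stored_ts, stored_size, current_ts, current_size):
    """Hypothetical change check: can this file be skipped outright?"""
    # If the database has no metadata for the file (MinValue timestamp),
    # the file cannot be skipped and every block must be re-checked.
    if stored_ts == DOTNET_MIN:
        return True
    # With accurate metadata, an unchanged timestamp and size let the
    # file be skipped without touching its blocks.
    return stored_ts != current_ts or stored_size != current_size

# A file whose metadata was lost in the failed run is rescanned:
print(needs_scan(DOTNET_MIN, 0, datetime(2020, 3, 27, 4, 11, 59), 1024))
# A file with accurate stored metadata is skipped:
print(needs_scan(datetime(2020, 3, 27, 4, 11, 59), 1024,
                 datetime(2020, 3, 27, 4, 11, 59), 1024))
```

That would explain why every file with a missing timestamp gets the slow block-by-block treatment even though all its blocks are already on the target.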

Details: I’m running Duplicati using Docker on a Synology NAS and mounting the volume that I’m backing up as a volume in the Docker container.

Welcome to the forum @awkspace

Do none of the messages say new: False, timestamp changed: True? Those cases are sometimes easier to explain, especially on Linux, as a one-time nuisance in the transition from old mono, which had lower time resolution. A database Recreate can also produce them, because it resets timestamp resolution to seconds. Was a Recreate part of the restart after the failure?
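To illustrate that one-time nuisance: if stored timestamps are truncated to whole seconds while the filesystem keeps sub-second precision, an exact comparison reports a spurious change once. This is a hedged sketch of the mismatch, not Duplicati's implementation; the helper `truncate_to_seconds` is hypothetical:

```python
from datetime import datetime

def truncate_to_seconds(ts):
    # Simulates timestamps stored at whole-second resolution,
    # as after a database Recreate or under old mono.
    return ts.replace(microsecond=0)

# Filesystem timestamp with sub-second precision:
on_disk = datetime(2020, 3, 27, 4, 11, 59, 123456)
stored = truncate_to_seconds(on_disk)

# Exact comparison sees a (spurious) "timestamp changed" once;
# after the file is re-recorded, subsequent runs match again.
print(stored != on_disk)
```

After that first re-record, the stored and on-disk values agree, which is why it shows up as a one-time issue rather than on every run.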

If every file is coming up as new (which that explanation does not cover), I'd wonder what shape the previous backup was in. Was a file that logged new: True also missing from the Restore tree?