With that number of files, it is hard to check anything manually. Generally, if a file is skipped due to an error (permission denied, file locked, etc.) it will generate a warning. If it is skipped due to configuration (filters, symlink policy, etc.) it will not generate a warning.
Compare file paths
How did you arrive at the count of 140,000 files?
If you can get a list of all the paths behind that number, you can compare it to the paths Duplicati has recorded. To do this, get SQLiteBrowser (or a similar tool), open the job's local database, and query the “File” view, like:
SELECT DISTINCT Path FROM File;
You can then export the result to CSV or similar and compare it with the list you are expecting. Hopefully there is a pattern in the missing files.
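If you would rather script the comparison than do it in a spreadsheet, here is a minimal Python sketch; the database path and the name of the expected-paths file are assumptions you would replace with your own:

import sqlite3

# Assumed locations; adjust to your setup.
DB_PATH = "path/to/duplicati-job-database.sqlite"  # the backup job's local database
EXPECTED_LIST = "expected_paths.txt"               # one path per line; the list behind your 140,000 count

# Paths Duplicati has recorded in the "File" view.
con = sqlite3.connect(DB_PATH)
recorded = {row[0] for row in con.execute("SELECT DISTINCT Path FROM File")}
con.close()

# Paths you expect to be in the backup.
with open(EXPECTED_LIST, encoding="utf-8") as f:
    expected = {line.rstrip("\n") for line in f if line.strip()}

# Paths you expected but Duplicati did not record.
missing = sorted(expected - recorded)
print(f"{len(missing)} expected paths are not in the backup")
for path in missing[:50]:  # print a sample; hopefully a pattern emerges
    print(path)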
Log what happens
An alternative is to go with logging. Set the advanced options --log-file=<path to a file> and --log-file-log-level=Verbose, and then run the job.
This will create quite a lot of noise, but you should see every single path in there with some information, like Excluding path due to ....
You can limit it a bit with:
--log-file-log-filter=Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess*
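Once the job has run, you can pull just the exclusion messages out of the log file. Below is a minimal Python sketch; the log file name is an assumption, and it simply matches lines containing “Excluding path” as in the message above:

import re
from collections import Counter

LOG_PATH = "duplicati-verbose.log"  # assumed; whatever you passed to --log-file

reasons = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as f:
    for line in f:
        # Keep only the enumeration messages about excluded paths.
        if "Excluding path" in line:
            print(line.rstrip())
            # Tally the part after "due to" to help spot a pattern, if present.
            m = re.search(r"due to (.+)$", line)
            if m:
                reasons[m.group(1).strip()] += 1

print("\nExclusion reasons seen:")
for reason, count in reasons.most_common():
    print(f"{count:6d}  {reason}")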
What are the differences between those two? Same machine? Same version of Duplicati? Same user?
My best guess here is that you are using a large volume size and/or a high setting for asynchronous uploads and compressors.
The volume size is set on the last page of the backup configuration, or with --volume-size, and defaults to 50MiB.
The compressors and uploaders can be limited with:
--concurrency-compressors=1
--asynchronous-upload-limit=1
--asynchronous-concurrent-upload-limit=1
(The last one will be removed in the future, in favor of the previous one)
They all default to the CPU core count / 2.
As described in another topic, you get: volumes = compressors + uploaders.
If you have 10 cores and a 1GiB volume size, you get roughly 10GiB of temporary files (assuming the transfer is the bottleneck): 5 compressors plus 5 uploaders means about 10 volumes in flight at once.
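As a quick sanity check of that estimate, here is a small Python sketch; the core count and volume size are the inputs you would plug in, and the defaults described above are assumed:

# Rough estimate of peak temporary-file usage while transfers are the bottleneck.
# Assumptions: compressors and uploaders each default to core count / 2,
# and each in-flight volume occupies roughly one volume size on disk.

def estimate_temp_usage_gib(core_count: int, volume_size_gib: float) -> float:
    compressors = core_count // 2
    uploaders = core_count // 2
    volumes_in_flight = compressors + uploaders
    return volumes_in_flight * volume_size_gib

# Example from the text: 10 cores and 1GiB volumes -> about 10GiB of temp files.
print(estimate_temp_usage_gib(10, 1.0))  # 10.0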