Processing time for Compacting

The first step is to figure out which type of tmp file is doing that. If it's SQL, see the previous reply.
If it's a Duplicati file, then the challenge is how to copy parts of files without reads and writes.
If the destination is remote, then possibly the .zip files involved are also encrypted.
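
If you want to check which kind it is, a small Python sketch like this can peek at the
leading bytes of the tmp files. The dup-* naming and the temp-folder location are
assumptions here, so adjust them for your setup.

```python
# Classify tmp files by their leading bytes (a sketch, not a Duplicati tool).
# Assumes "dup-*" names in the system temp folder; adjust path/pattern as needed.
import glob
import os
import tempfile

SIGNATURES = [
    (b"SQLite format 3\x00", "SQLite database (SQL-side tmp)"),
    (b"PK\x03\x04", "zip archive (unencrypted volume)"),
    (b"AES", "AES Crypt container (encrypted volume)"),
]

def classify(path):
    with open(path, "rb") as f:
        head = f.read(16)
    for magic, label in SIGNATURES:
        if head.startswith(magic):
            return label
    return "unknown"

for path in glob.glob(os.path.join(tempfile.gettempdir(), "dup-*")):
    print(path, "->", classify(path))
```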

Do you see gaps in the reads and writes? I’m not sure how well the downloads/uploads
overlap with the file-to-file copy. If they overlap badly, that might be a possible improvement area.
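
To make "overlap" concrete, here is a rough Python illustration (not Duplicati's actual
code; the helpers are placeholders) of prefetching the next volume in the background
while the current one is still being processed:

```python
# Illustration only: overlapping downloads with the local copy/repack work,
# rather than doing download -> copy -> upload strictly in turn.
import queue
import threading

def download(name):              # placeholder for a remote GET
    return f"contents of {name}"

def repack_and_upload(data):     # placeholder for the copy + PUT work
    pass

def pipeline(volume_names, prefetch=2):
    q = queue.Queue(maxsize=prefetch)   # bound how far ahead downloads run

    def fetcher():
        for name in volume_names:
            q.put(download(name))       # network-bound work in a thread
        q.put(None)                     # end-of-stream marker

    threading.Thread(target=fetcher, daemon=True).start()
    while (data := q.get()) is not None:
        repack_and_upload(data)         # disk/CPU-bound work runs concurrently

pipeline(["dblock-1.zip.aes", "dblock-2.zip.aes"])
```

If the current behavior looks more like the strictly sequential version, that gap is
roughly the improvement being suggested.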

I/O performance to the destination hasn’t been described at all, but it could be a factor too.

For any processing-time question, you have to try to find out what the limiting factor is in your case.

Another generic improvement for many time issues is to use a larger blocksize. For
large backups (over 100 GB), the default 100 KB makes millions of fairly small blocks,
which slows down the SQL work but also slows block copying in compacts (your concern).
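
As a quick back-of-envelope in Python (the 500 GB source size is just an example):

```python
# Block counts at the default blocksize vs. a larger one.
def block_count(backup_bytes, blocksize_bytes):
    return backup_bytes // blocksize_bytes

backup = 500 * 1024**3                   # e.g. a 500 GB source
print(block_count(backup, 100 * 1024))   # 100 KB default -> 5,242,880 blocks
print(block_count(backup, 5 * 1024**2))  # 5 MB blocksize ->   102,400 blocks
```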

Depending on your source file size and your destination, an extreme form (which will
probably degrade deduplication hugely unless files are identical – you may not care)
would be to set the blocksize to the remote volume size, so there is less partial file copying.

A remote file containing only a wasted block is just deleted, with no repacking of a remainder.
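
Roughly speaking (a hypothetical helper, not Duplicati's actual logic), the choice of
action per remote volume looks like this, and blocksize == volume size forces every
volume into one of the first two cheap cases:

```python
# Sketch of the per-volume decision: with one block per volume, waste is
# all-or-nothing, so compacting never has to repack survivors.
def compact_action(total_blocks, wasted_blocks):
    if wasted_blocks == 0:
        return "leave alone"
    if wasted_blocks == total_blocks:
        return "delete remote file"                   # nothing left to repack
    return "download, repack survivors, upload"       # the expensive path

print(compact_action(1, 1))        # blocksize == volume size: delete only
print(compact_action(1000, 400))   # many small blocks: partial copying needed
```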

You can’t change blocksize on an existing backup though, as it’s central to everything.

I already mentioned how one can make it run more often (reduce the threshold), but it’s
unclear how that would change the time per run, as the threshold applies to each volume too.
There was talk on the forum of trying to separate the two thresholds. It’s not done yet.
There are far too few qualified volunteers.
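
Sketched in Python, the single-threshold situation versus the proposed split looks
something like this (the names, logic, and the 25 default are assumptions here, just
to show the idea):

```python
# Today one --threshold value plays two roles; the forum idea is to split them.
def should_compact(total_waste_pct, run_threshold=25):
    return total_waste_pct >= run_threshold                  # role 1: trigger a run

def volumes_to_repack(volume_waste_pcts, volume_threshold=25):
    return [name for name, pct in volume_waste_pcts
            if pct >= volume_threshold]                      # role 2: pick volumes

# Lowering a single shared value makes runs more frequent *and* pulls more
# volumes into each run, which is why the effect on time per run is unclear.
print(should_compact(30), volumes_to_repack([("dblock-1", 10), ("dblock-2", 40)]))
```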

Compact - Limited / Partial