Does Duplicati Overwrite .ZIP Files in Storage

That nice broad answer even covers unusual things like upload retries and The PURGE command. In typical use, though, the frequent activities are backups, deletions driven by retention, and compacting of the data that remains.

There are some subtle points one can glean from the author's documentation, and also from the manual.

Not immediately, because typically some (or most) of the old data is still needed: either the new backup references it as unchanged files, or it belongs to a previous backup version that is still kept under the chosen retention.
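To make that concrete, here is a minimal Python sketch (not Duplicati's code; the volume names, block hashes, and dates are invented) of why a remote dblock volume usually cannot be removed right after a new backup: its blocks may still be referenced by the newest version, or by an older version kept under retention.

```python
# Minimal sketch: a remote volume is only safe to drop once NO retained
# backup version references any block stored inside it.

# Which block hashes each remote dblock volume contains (invented names).
volumes = {
    "dblock-001.zip.aes": {"blockA", "blockB", "blockC"},
    "dblock-002.zip.aes": {"blockD", "blockE"},
}

# Which blocks each retained backup version still references.
retained_backups = {
    "2024-06-01": {"blockA", "blockB", "blockD"},  # older version kept by retention
    "2024-06-08": {"blockA", "blockC"},            # newest backup reuses unchanged blocks
}

referenced = set().union(*retained_backups.values())
for name, blocks in volumes.items():
    live = blocks & referenced
    status = "deletable" if not live else f"still needed ({len(live)}/{len(blocks)} blocks live)"
    print(name, "->", status)
```

In this toy example, dblock-002.zip.aes is half waste but still cannot be deleted, which is exactly the situation compact exists to clean up.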

Compacting files at the backend

When a predefined percentage of a volume is used by obsolete backups, the volume is downloaded, old blocks are removed and blocks that are still in use are recompressed and re-encrypted.
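As a rough sketch of that flow, the function below mirrors the shape described in the excerpt: download a volume, keep only the blocks that are still in use, write them into a new volume, and delete the old one. This is not Duplicati's actual implementation; the `download`, `upload`, `delete`, and `is_block_in_use` helpers are hypothetical placeholders.

```python
def compact_volume(volume_name, download, upload, delete, is_block_in_use):
    """Sketch of one compact step: rewrite a remote volume, keeping only live blocks."""
    old_blocks = download(volume_name)  # dict: block hash -> block bytes
    live_blocks = {h: data for h, data in old_blocks.items() if is_block_in_use(h)}

    if live_blocks:
        # Surviving blocks go into a brand-new volume (new name, freshly
        # compressed and encrypted); the old volume is removed afterwards.
        new_name = volume_name.replace(".zip.aes", "-repacked.zip.aes")
        upload(new_name, live_blocks)

    delete(volume_name)


# Tiny in-memory demo of the helpers (hypothetical, for illustration only):
store = {"dblock-002.zip.aes": {"blockD": b"...", "blockE": b"..."}}
in_use = {"blockD"}
compact_volume(
    "dblock-002.zip.aes",
    download=lambda name: store[name],
    upload=lambda name, blocks: store.update({name: blocks}),
    delete=lambda name: store.pop(name, None),
    is_block_in_use=in_use.__contains__,
)
print(store)  # only the repacked volume, containing just blockD, remains
```

In practice, compact can also combine the leftovers from several partly wasted volumes into one new full volume, as the next excerpt describes.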

The COMPACT command

Old data is not deleted immediately because, in most cases, only small parts of a dblock file are old data. When the amount of old data in a dblock file grows, it may be worth replacing it.
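A sketch of that "worth replacing yet?" decision is below. The 25% default is intended to mirror Duplicati's `--threshold` option, but treat that option name and default as an assumption to check against your version's documentation.

```python
def should_compact(volume_size_bytes: int, wasted_bytes: int, threshold_pct: float = 25.0) -> bool:
    """True once the share of obsolete data in a dblock crosses the waste threshold."""
    if volume_size_bytes == 0:
        return False
    return 100.0 * wasted_bytes / volume_size_bytes >= threshold_pct

# Example: a 50 MB dblock with 5 MB of obsolete data is only 10% waste,
# so a download/re-upload cycle is not worth it yet; at 20 MB (40%) it is.
print(should_compact(50_000_000, 5_000_000))   # False
print(should_compact(50_000_000, 20_000_000))  # True
```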

When compact collects partly filled dblocks, the new volume gets a new name and the old ones get deleted.
If your provider charges for download or upload, factor that in as well. The math gets complex.
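For a feel of how the math gets complex, here is a back-of-the-envelope comparison of one compact cycle's transfer cost against the monthly storage it frees up. The prices are made-up example numbers, not any particular provider's rates.

```python
GB = 1024 ** 3

def compact_cost_vs_savings(volume_bytes, wasted_fraction,
                            egress_per_gb, ingress_per_gb, storage_per_gb_month):
    """One-off transfer cost of rewriting a volume vs. the monthly storage saved."""
    download_cost = (volume_bytes / GB) * egress_per_gb
    upload_cost = (volume_bytes * (1 - wasted_fraction) / GB) * ingress_per_gb
    monthly_saving = (volume_bytes * wasted_fraction / GB) * storage_per_gb_month
    one_off = download_cost + upload_cost
    months_to_break_even = one_off / monthly_saving if monthly_saving else float("inf")
    return one_off, monthly_saving, months_to_break_even

# Example: 1 GB volume, 30% waste, $0.01/GB egress, free uploads, $0.005/GB-month storage.
cost, saving, months = compact_cost_vs_savings(1 * GB, 0.30, 0.01, 0.0, 0.005)
print(f"one-off transfer ${cost:.4f}, saves ${saving:.4f}/month, "
      f"break-even after ~{months:.1f} months")
```

Add per-request fees or a minimum storage duration charge and the break-even point moves again, which is why the threads below are worth reading.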

Deleting files - excessive download said a lot about how compact works and about trying to optimize it.
Cost optimisation on Wasabi had some other thoughts on dealing with a retention charge policy.
That policy was potentially a bit worse than a simple minimum duration if it interacted with other policies.