Not in the traditional way you may be thinking of. There is no full backup followed by an incremental chain.
Block-based storage engine explains how Duplicati 1.3 used that scheme, but current Duplicati is block based.
Backup type is incremental has Duplicati’s author weighing in on that discussion, and other posts follow.
The term “deduplicated full backup” is probably the most technically accurate description. A file is stored as a set of blocks, which may already exist in the backup (for example, if the file was present in a previous backup run). Blocks already in the backup are referenced without re-uploading; blocks not yet present are uploaded.
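As a rough sketch of the idea (not Duplicati’s actual code; the block size, names, and data structures here are just for illustration):

```python
import hashlib

BLOCK_SIZE = 100 * 1024      # illustrative; Duplicati's default block size varies by version

remote_blocks = {}           # hash -> block data already sitting at the destination


def backup_file(path):
    """Split a file into fixed-size blocks; upload only blocks not seen before."""
    block_list = []          # the hashes needed to rebuild this file later
    uploaded = reused = 0
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            h = hashlib.sha256(block).hexdigest()
            if h in remote_blocks:
                reused += 1               # block already in the backup: just reference it
            else:
                remote_blocks[h] = block  # new block: upload it
                uploaded += 1
            block_list.append(h)
    return block_list, uploaded, reused
```

The point is that the unit of upload is the block, not the file, so a file that is mostly unchanged costs almost nothing to back up again.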
The initial backup is large because every block is uploaded. Blocks only count as waste once no backup version needs them. The retention setting reduces that need by deleting old backup versions, and eventually compact reclaims the wasted space.
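A sketch of the waste idea (again illustrative; the version names and block hashes are made up):

```python
stored_blocks = {"a", "b", "c", "d", "e"}   # everything currently at the destination

versions = {                                # which blocks each backup version needs
    "2024-01-01": {"a", "b", "c"},
    "2024-02-01": {"a", "b", "d"},
    "2024-03-01": {"a", "d", "e"},
}

def wasted(stored_blocks, versions):
    needed = set().union(*versions.values()) if versions else set()
    return stored_blocks - needed

print(wasted(stored_blocks, versions))   # set(): every block is still needed

del versions["2024-01-01"]               # retention deletes the oldest version
print(wasted(stored_blocks, versions))   # {'c'}: compact can now reclaim block "c"
```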
Later backups typically upload less because most files are mostly unchanged, so only new blocks are uploaded. This may be more compact than incremental backup schemes that re-upload a whole file if any part of it changes.
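With made-up numbers, the difference looks roughly like this:

```python
# Made-up numbers: a 1 GiB source file in which about 1 MiB worth of blocks changed.
whole_file_mib = 1024
changed_blocks_mib = 1

print(f"file-level incremental uploads about {whole_file_mib} MiB")   # whole file again
print(f"block-level backup uploads about {changed_blocks_mib} MiB")   # only changed blocks
```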
You can look in the job log’s Complete log for BytesUploaded to see how fast your source is changing.
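If you want to track it over time, something like this could pull the number out, assuming you save the Complete log as JSON and that BytesUploaded sits under BackendStatistics (the exact layout may differ by Duplicati version):

```python
import json

with open("complete_log.json") as f:      # hypothetical file name for a saved Complete log
    log = json.load(f)

uploaded = log["BackendStatistics"]["BytesUploaded"]   # assumed location of the field
print(f"Uploaded this run: {uploaded / 1024 / 1024:.1f} MiB")
```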
Quite possibly you can keep a lot of versions (if there’s some value in that) without using much more storage.
Nope. All the blocks needed for all files in all versions are kept, because someone might want some version of some source file restored, at which time the file is reconstructed block by block from the backup data.
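As a sketch of the restore side (illustrative only, in the same shape as the backup example above), the stored block list is enough to rebuild the file:

```python
def restore_file(block_list, block_store, dest_path):
    """Rebuild a file block by block from backed-up block data."""
    with open(dest_path, "wb") as out:
        for h in block_list:
            out.write(block_store[h])   # look up each block by hash, append in order
```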