Non-incremental backup and data archiving

Welcome to the forum @Axel

Features

Incremental backups
Duplicati performs a full backup initially. Afterwards, Duplicati updates the initial backup by adding only the changed data. That means that if only tiny parts of a huge file have changed, only those tiny parts are added to the backup. This saves time and space, and the backup size usually grows slowly.

Why is this seen as a problem?

What do you mean by “archive”? If you mean copying from Azure to somewhere else, that’s a do-it-yourself task if you’re able to.
You should consider archiving the current local database if you’re going to archive the current backup.
I’m not sure what the goal is. Offline storage? Backup “redundancy” (consider multiple backup tools)?

The dlist files are not a backup (and are not nearly big enough). They list files from that backup and supply information on what blocks are needed to reassemble everything. Blocks are in the dblock files which also have an index in an associated dindex file. A given block can be referenced over and over, meaning there is what’s known as block-based deduplication. Any unchanged file uses existing blocks.
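To make the deduplication idea concrete, here is a minimal Python sketch, not Duplicati’s actual code or on-disk format: files are split into fixed-size blocks, each block is hashed, and a block already present in the store is referenced rather than stored again. The block size, hash algorithm, and in-memory “block store” are assumptions chosen for illustration; in a real backup the block data lives in dblock volumes, their hashes in dindex files, and the per-file block lists in the dlist files.

```python
# Illustration only: simplified block-based deduplication,
# not Duplicati's actual implementation or storage format.
import hashlib

BLOCK_SIZE = 100 * 1024   # assumed block size for this example

block_store = {}          # hash -> block bytes (stands in for dblock contents)

def backup_file(path):
    """Split a file into blocks, storing only blocks not seen before.
    Returns the ordered list of block hashes needed to reassemble the file
    (roughly the role a dlist entry plays)."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            h = hashlib.sha256(block).hexdigest()
            if h not in block_store:       # deduplication: an unchanged
                block_store[h] = block     # block is never stored twice
            hashes.append(h)
    return hashes

def restore_file(hashes, path):
    """Reassemble a file from its ordered block hashes."""
    with open(path, "wb") as f:
        for h in hashes:
            f.write(block_store[h])
```

If a large file changes in only a few places, a second run of `backup_file` adds just the blocks whose hashes are new, which is why the backup grows slowly over time.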

Block-based storage engine
The backup process explained
How the backup process works
How the restore process works