Full backup optimization in Duplicati v2.0?

Hi,

I’ve been using Duplicati for years and I’m very happy with it. So far I’ve stayed on v1.3.4 (not moved to 2.0 yet) and I wonder if v2.0 has improved/optimized the way it does full backups. My case is simple: I just have a huge “C:\Doc” directory with many subdirectories and files, spanning several GB in total. The problem arises when Duplicati runs a full backup against my remote server over a low-speed Internet connection, which takes a lot of time. As far as I have learned about how incremental backups work, full backups are performed to “rebase” the backup to a fully known state. However, what if, among all the stuff I have in “C:\Doc”, only a few files are added/modified on a regular basis? Is there any way in 2.0 (or maybe in 1.3.4) to prevent Duplicati from making full backups of stuff that almost never changes?

Thank you!

My understanding of Duplicati was that it always backs up only the files that have changed. So after the initial full backup, it will only do incremental backups.
However, while backing up (full or incremental), the files are chopped into chunks; if a “new” file has certain chunks already uploaded by other files, those chunks won’t be uploaded/backed up again.

When deleting files, Duplicati will delete the files; however, chunks still in use by other files in the backup archive will not be deleted.

When you restore files, Duplicati will gather all the chunks needed for each file and recreate your files.

So no need to prevent any full backup from being made…
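
To make that idea concrete, here is a minimal sketch of how a block-based, deduplicating backup can work. This is not Duplicati’s actual code: the fixed block size, the in-memory `block_store` dictionary and the `file_index` mapping are made-up illustrations of the concept.

```python
import hashlib

BLOCK_SIZE = 100 * 1024  # illustrative fixed block size, not Duplicati's real default

def backup_file(path, block_store, file_index):
    """Split a file into chunks and 'upload' only chunks not seen before."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in block_store:      # deduplication: known chunks are skipped
                block_store[digest] = chunk    # "upload" the new chunk
            hashes.append(digest)
    file_index[path] = hashes                  # a file is just a list of chunk hashes

def restore_file(path, dest, block_store, file_index):
    """Reassemble a file from the chunks referenced in the index."""
    with open(dest, "wb") as out:
        for digest in file_index[path]:
            out.write(block_store[digest])

def prune(block_store, file_index):
    """Remove chunks no remaining file references; shared chunks survive."""
    live = {h for hashes in file_index.values() for h in hashes}
    for digest in list(block_store):
        if digest not in live:
            del block_store[digest]
```

Running `backup_file` a second time on an unchanged file adds nothing to `block_store`, which is why repeated full uploads aren’t needed in a scheme like this.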

So it’s basically an implementation-related issue? I was wondering if there was a way to handle my case in a more optimized manner. After all, what I’m requesting is not a crazy idea: if 99% of a backed-up directory never changes anymore, what’s the point of performing full backups of it? I’m just wondering! :slightly_smiling_face:

Thank you anyway!

Duplicati 2 does not do repeated full backups; the only full backup is the initial one. From the manual:

Features

Incremental backups
Duplicati performs a full backup initially. Afterwards, Duplicati updates the initial backup by adding the changed data only. That means, if only tiny parts of a huge file have changed, only those tiny parts are added to the backup. This saves time and space and the backup size usually grows slowly.
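
As a rough illustration of that claim (this is not Duplicati’s real chunking code; the 100 KiB block size and the in-place single-byte change are made-up assumptions), re-hashing fixed-size blocks of a modified file shows how little would need to be uploaded:

```python
import hashlib
import os

BLOCK_SIZE = 100 * 1024  # illustrative block size only

def block_hashes(data):
    """Hash every fixed-size block of a file's contents."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

# A ~10 MB file where one byte of one block is changed in place.
original = bytes(os.urandom(10 * 1024 * 1024))
modified = bytearray(original)
modified[5_000_000] ^= 0xFF

old, new = block_hashes(original), block_hashes(bytes(modified))
changed = sum(1 for a, b in zip(old, new) if a != b)
print(f"{changed} of {len(new)} blocks changed")  # -> 1 of 103
```

Under these assumptions only that one changed block would be uploaded again; the other blocks are already in the backend.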

Block-based storage engine (the page is down now, so here is an archive.org copy) explains the drawbacks of the Duplicati 1 design, and how the Duplicati 2 redesign that @AntMar described improves on it.

Use Duplicati 2. Unfortunately you can’t migrate from Duplicati 1, because the redesign is hugely different.

I think your reply answers my question. I will give v2 a try as soon as it leaves beta.

Thank you!

I have been a (mostly) happy user of Duplicati 2 for years, and to me it looks like the developers tend to imitate Google in leaving a well-working product in a beta state indefinitely :wink:

I’ll give it a try then :stuck_out_tongue_winking_eye: