Old backup files not deleted

Hi, I have changed the backup retention in my backup configuration from “Smart backup retention” to “Delete backups that are older than” 6 months.
After that I was expecting that all backup files older than 6 months would have been deleted.
This is not happening: I still have dblock and index files that are 1 year old.

Am I doing something wrong?

How can I get rid of those old files?


I think that the documentation about retention is a bit confusing. An important detail is hidden in the middle:

the latest (most current) version of existing files will be kept unlimited time

In short, if you have a file that still exists and has not changed in 5 years, it does not matter that you asked not to keep backups older than 6 months; Duplicati will keep it backed up.
Beyond that, how the data is arranged inside your remote files is not something that can be easily predicted. A remote file that is 9 months old may be compacted away while one that is a year old is kept. It depends on what percentage of the data in a given remote data file is obsolete.
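
To put a rough number on that last point, here is a little sketch of the decision (this is not Duplicati's actual code; the 25% value stands in for the default of the --threshold advanced option, assuming you have not changed it):

# Sketch only: a remote volume becomes a compacting candidate based on how much
# of its data is obsolete, not on how old the volume is.
def should_compact(volume_size_bytes: int, obsolete_bytes: int,
                   threshold_percent: float = 25.0) -> bool:
    wasted_percent = 100.0 * obsolete_bytes / volume_size_bytes
    return wasted_percent >= threshold_percent

# A 9-month-old volume that is 40% obsolete would be repacked...
print(should_compact(50_000_000, 20_000_000))  # True
# ...while a 1-year-old volume that is only 5% obsolete stays untouched.
print(should_compact(50_000_000, 2_500_000))   # False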

EDIT:

Thank you very much, that clarifies everything about why the files are still there.

But when I delete one of my files that has not been modified since 2022, are the backup files (dblock and index files) containing that old file updated (hence reducing their size)?

Compacting files at the backend

The COMPACT command

Old data is not deleted immediately

In fact, if you have more than one version, it hasn’t even aged out of the backup, so you can get it back.
The file won’t turn into wasted space until the versions containing it have been deleted by your retention rule. Then, when enough wasted space builds up, compact runs. This doesn’t modify the existing files; it repacks the still-needed data into new, fuller files.
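
If it helps to picture “repack”, here is a toy model (very much simplified, not how Duplicati actually implements it): blocks still referenced by some backup version survive, obsolete blocks are dropped, and the survivors go into new, fuller volumes before the old ones are deleted.

def compact_volumes(old_volumes, live_blocks, blocks_per_volume=4):
    # Toy model only: keep referenced blocks, drop obsolete ones, fill new volumes.
    new_volumes = {}
    pending = []
    for blocks in old_volumes.values():
        for block in blocks:
            if block in live_blocks:          # still needed by some version
                pending.append(block)
            if len(pending) == blocks_per_volume:
                new_volumes[f"dblock-{len(new_volumes)}"] = pending
                pending = []
    if pending:                               # last, possibly partial, volume
        new_volumes[f"dblock-{len(new_volumes)}"] = pending
    return new_volumes                        # the old volumes are then deleted

old = {"dblock-A": ["b1", "b2", "b3", "b4"], "dblock-B": ["b5", "b6", "b7", "b8"]}
print(compact_volumes(old, live_blocks={"b1", "b4", "b6"}))
# {'dblock-0': ['b1', 'b4', 'b6']}  -> two mostly-wasted volumes become one fuller one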

EDIT 1:

The backup process in the user manual tries to explain this with an example involving bricks.
How the backup process works is more technical and explains things in terms of the file types (e.g. dblock) you refer to.

EDIT 2:

A dblock may contain blocks from many files. It is not updated when one of them changes, as that would create a tremendous amount of download/modify/upload work every time anything changes. Compact batches that work into less frequent runs, using some tuning options which you can set if you wish.
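
A rough back-of-the-envelope example of why per-change updates would hurt (the 50 MB figure is the default remote volume size; the 100 KB change is just made up):

remote_volume_mb = 50        # default "Remote volume size"
changed_data_kb = 100        # a hypothetical small change

# Rewriting the affected dblock in place would mean downloading and re-uploading
# the whole volume for every small change:
print(f"{2 * remote_volume_mb} MB of traffic to update {changed_data_kb} KB")

# Batching with compact means the old blocks just sit as wasted space for a while,
# and a later compact run pays that download/repack/upload cost for many changes at once.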

As explained by @ts678, that’s not immediate, unless you are deleting huge amounts of data that make entire remote files obsolete. Duplicati will usually delay compacting, depending on parameters that you can tune. If you urgently need free space this will not help much, I know, but you did not say that, so I’ll refrain from overwhelming you with the complexity of it all (which I don’t know so well myself, as I always try to stay well below the space allocated to my backups anyway).

<job> → Show log, which has a Compact phase report, will give some information on what happened.


For predictive purposes (or fun?), open About → Show log → Live → Verbose and then click Compact now.

Jan 10, 2024 9:57 AM: The operation Compact has completed
Jan 10, 2024 9:57 AM: Compacting not required
Jan 10, 2024 9:57 AM: Found 28 volume(s) with a total of 2.75% wasted space (266.14 MB of 9.46 GB)
Jan 10, 2024 9:57 AM: Found 3 small volumes(s) with a total size of 4.53 MB
Jan 10, 2024 9:57 AM: Found 0 fully deletable volume(s)
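
Checking those numbers (and assuming the default --threshold of 25, since you haven’t mentioned changing it) shows why it said compacting is not required. The small-volume and fully-deletable counts also feed into the decision, but they don’t trigger anything here either.

wasted_mb = 266.14
total_gb = 9.46
wasted_percent = 100 * wasted_mb / (total_gb * 1024)
print(f"{wasted_percent:.2f}% wasted")   # ~2.75%, matching the log line above
print(wasted_percent >= 25)              # False -> "Compacting not required"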

Long-term monitoring in a log file can be set up, if you really want to, and log levels can be fine-tuned.

Thank you all for the clarifications!