Retention policy with current 2.0.2.1 beta version?

I was wondering what a decent retention policy would be within the current 2.0.2.1 beta, given that I want to stay as close as possible to CrashPlan’s 2-3 backups per hour while keeping enough history that I don’t actually lose older versions of files that sometimes change quickly. For example, when I am editing a file during a day, I might see 10 versions in that day, but I don’t want those 10 ‘intermediates’ to crowd out the versions from the last weeks. Unlimited retention is not an option (I don’t have unlimited storage).

Of course, I’d be very happy if the version with the new retention policy became stable :slight_smile:

Because of the deduplication, each version should not add that much to the back-end storage, though it depends on your churn rate.
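
As a rough illustration with made-up numbers: on a 100 GB source where about 0.5 GB of blocks actually change per day, six backups a day add roughly 0.5 GB of new back-end data per day in total, not 6 x 100 GB, because unchanged blocks are stored only once.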

I am currently doing 6 backups per day and the back-end storage is growing only very slowly. My biggest gripe, now that I’ve been using Duplicati for a few months and have 600 versions, is that it takes FOREVER to navigate the directory tree when doing a restore. I believe that has been fixed in a canary build by optimizing the SQL queries.

I, too, look forward to paring down my version count with the granular backup retention option but am going to wait for the next stable (er, beta) release.

This obviously isn’t ideal, but until the --retention-policy feature (introduced in 2.0.2.10 canary) makes it into a beta or stable version, you could manually (or via an external CLI script) use the delete or purge commands, though programmatically determining which versions to remove could be a pain. (Note that I believe this process is essentially what the --retention-policy parameter does internally.)
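
For reference, the new option takes a comma-separated list of timeframe:interval pairs; a sketch of a policy close to what you describe (the values here are illustrative, so check the help output of your build for the exact syntax):

--retention-policy="1W:1D,4W:1W,12M:1M"

As I understand it, that would keep one version per day for the last week, one per week for the last four weeks, and one per month for the last year, deleting anything older.

Duplicati.CommandLine.exe help delete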

Usage: delete <storage-URL> [<options>]

  Marks old data deleted and removes outdated dlist files. A backup is deleted
  when it is older than <keep-time> or when there are more newer versions
  than <keep-versions>. Data is considered old when it is not required by
  any existing backup anymore.

  --keep-time=<time>
    Marks data outdated that is older than <time>.
  --keep-versions=<int>
    Marks data outdated that is older than <int> versions.
  --version=<int>
    Deletes all files that belong to the specified version(s).
  --allow-full-removal
    Disables the protection against removing the final fileset
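
For example, to preview what a time-based cutoff would remove (a sketch with a made-up storage URL; I believe the general --dry-run option applies here too, but verify on your version before running it for real):

Duplicati.CommandLine.exe delete "s3://my-bucket/my-backup" --keep-time=3M --dry-run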

Or, if you don’t want to wait for flagged-as-deleted files to be cleaned up during normal backup maintenance, there is the purge command:

Duplicati.CommandLine.exe help purge


Usage: purge <storage-URL> <filenames> [<options>]

  Purges (removes) files from remote backup data. This command can either take
  a list of filenames or use the filters to choose which files to purge. The
  purge process creates new filesets on the remote destination with the
  purged files removed, and will start the compacting process after a purge.
  By default, the matching files are purged in all versions, but this can be
  limited by choosing one or more versions. To test what will happen, use the
  --dry-run flag.

  --dry-run
    Performs the operation, but does not write changes to the local database
    or the remote storage
  --version=<int>
    Selects specific versions to purge from, multiple versions can be
    specified with commas
  --time=<time>
    Selects a specific version to purge from
  --no-auto-compact
    Skips the compact process that normally runs after purging files
  --include=<filter>
    Selects files to purge, using filter syntax
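
For instance, to remove every copy of a given file from all backup versions while previewing the result first (a sketch; the storage URL and filter are made up for illustration):

Duplicati.CommandLine.exe purge "s3://my-bucket/my-backup" --include="*/report.docx" --dry-run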