How to impose a minimum delay between two automatic backups

Hi,
My question is rather simple: is there a way to configure a minimum delay between two consecutive automatic backups (let's say 1 day)? For instance, when a backup is delayed because the computer to back up is off, I observe a first backup making up for the missed planned backup, then a second one a few dozen minutes later when it is time for the next planned backup to occur. The second one is useless to me.
Thank you for your answers

Welcome to the forum @Emmanuel1

One question is which part of this actually causes a problem; in the meantime, here are some ideas for dealing with it.

There’s no option for a minimum delay. When Duplicati is up, backups run per schedule.

There are two things you can do to avoid backups that are too close together. The easier method is a retention policy, where you can define a minimum time between two backups; however, that only removes backups after the fact. In your scenario, you could have the second backup deleted when the backup after it sees that it is too close to the missed one that ran late.

Retention policy describes that method.
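The retention-policy option takes timeframe:interval pairs, with U meaning an unlimited timeframe. The exact values below are illustrative, but an entry like U:1D keeps at most one backup per day, so a near-duplicate backup gets thinned out on a later run:

```
# Keep at most one backup per day, forever ("U" = unlimited timeframe):
--retention-policy="U:1D"

# Or a tiered policy: daily for a week, weekly for a month, monthly for a year:
--retention-policy="7D:1D,4W:1W,12M:1M"
```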

Scripts allow skipping a backup based on an exit code, e.g. if the backup is too close to the one before it, which would save a bit of storage. Duplicati deduplicates data, so the savings would be low, because probably little would have changed in a few dozen minutes. You would still save a dlist file, which records what's in the backup, but its size will vary depending on the backup.

REM The following exit codes are supported:
REM
REM - 0: OK, run operation
REM - 1: OK, don't run operation
REM - 2: Warning, run operation
REM - 3: Warning, don't run operation
REM - 4: Error, run operation
REM - 5: Error, don't run operation
REM - other: Error, don't run operation

Scripts are OS-dependent. If you’re interested in this, please say what OS you are using.
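On Windows, a run-script-before script would normally be a .bat/.cmd file, but as a sketch of the exit-code logic above, here is a cross-platform illustration in Python. The stamp-file path, the 8-hour minimum gap, and the helper names are my assumptions, not Duplicati conventions; on Windows a one-line .bat wrapper could invoke it.

```python
#!/usr/bin/env python3
# Hedged sketch (not shipped with Duplicati): a run-script-before helper
# that skips a backup when the previous one started less than MIN_GAP ago,
# using Duplicati's documented exit codes (0 = OK, run; 1 = OK, don't run).
# The stamp-file path, MIN_GAP value, and all names here are illustrative.
import os
import time

STAMP = os.path.join(os.path.expanduser("~"), "duplicati-last-run.stamp")
MIN_GAP = 8 * 3600  # minimum seconds between two backups (assumption: 8 h)

def decide_exit_code(last_run, now, min_gap):
    """Return 1 (skip the backup) if the last run was under min_gap ago, else 0."""
    if last_run is not None and (now - last_run) < min_gap:
        return 1  # OK, don't run operation
    return 0      # OK, run operation

def gate():
    """Compute the exit code and, when the backup will run, record its start time."""
    last = os.path.getmtime(STAMP) if os.path.exists(STAMP) else None
    code = decide_exit_code(last, time.time(), MIN_GAP)
    if code == 0:
        # Touch the stamp file; its mtime marks this backup's start.
        with open(STAMP, "w") as f:
            f.write("last backup start recorded via mtime")
    return code

# In a real deployment, a tiny wrapper would exit with gate()'s value,
# e.g.:  import sys; sys.exit(gate())
```

The stamp file is only a convenient way to remember the last start time; any persistent timestamp (registry value, database query) would work the same way.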

Thank you for the answer.
I didn’t know about the complete syntax for retention policy. Useful for saving storage.
As for the script, that certainly sounds possible. The OS is mostly Windows.

I am running into the same issue.

Could you provide a Windows script (for Duplicati running as a Windows service) that will:

  • either pause all of Duplicati for 8 hours? (Not ideal, because it affects all jobs, but better than nothing.)

  • Or perhaps reschedule the just-finished backup job 8 hours into the future?

  • Or, as you may have hinted at, skip the job (though that could delay the next job longer than the set interval, which is not ideal).

When that option is enabled, having jobs catch up after a machine was offline is desirable, but running a second one (for example) ~2 hours later simply because the original schedule happened to fall at that time is not.

There is another use case which goes even further here (4851).


The implementation is not in line with the functional requirement, nor with the expectation the GUI sets. The GUI offers a switch to run a missed job as soon as possible.

A job’s schedule lets you set a time and date plus a repeat interval, for example 8 hours. If one sets the job’s interval to 8 hours, one should be able to expect the next run no sooner than 8 hours later.

Running that job again automatically within that interval is not aligned with the job’s configuration. The current status quo is probably an implementation choice, likely because it was easier to build, but it is a technical design choice with functional ramifications.
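To make the contrast concrete, here is a small illustrative sketch of the two policies: the current fixed-grid behavior versus the finish-relative rescheduling requested in 4851. This is not Duplicati's actual scheduler; the function names and units (hours) are assumptions chosen for readability.

```python
# Illustrative sketch of the two scheduling policies discussed above.
# Not Duplicati's actual scheduler; names and units (hours) are assumptions.

def next_run_fixed_schedule(origin, interval, now):
    """Grid-style behavior: the first slot origin + k*interval after 'now'."""
    slot = origin
    while slot <= now:
        slot += interval
    return slot

def next_run_after_finish(finish, interval):
    """Requested behavior: reschedule relative to when the run finished."""
    return finish + interval

# Example: an 8-hour schedule anchored at hour 0. The machine was off, and a
# catch-up backup finishes at hour 7. The fixed grid fires again at hour 8
# (only one hour later), while finish-relative scheduling waits until hour 15.
```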

I would say that either the expectation needs to be managed/corrected in the GUI, or the workings of the system need to be extended/updated.

Though Duplicati does a good job of deduplication and the issue above causes no space problems for me, I do experience unintended side effects from it:

  • Meaningless report mails. When managing n systems, every finished job triggers a Duplicati console e-mail. The relevant job e-mails are now blended with the redundant ones. (This cannot be filtered or disabled, because the first e-mail is relevant.)

  • In some contexts the total number of files in the target backup directory can be worrisome; structurally taking non-meaningful snapshots, each with its dlist file, does not help here.

  • The compacting stage can take longer when you use a retention policy that thins the snapshots. So it’s wasteful in time there as well.

  • For local backups set with a higher test-backup-percentage, unnecessary (CPU) resources are used again during the verification phase without truly benefiting its overall goal.

  • For longer-running backups, from the user at 4851: “This way when the job finishes, it schedules the next run at now+period of time. This would solve many of my issues as some of my systems are just constantly running backups because they run over the period of time.”


Background for those interested:

Ideally, we only want to run a job when it delivers backup value to the user in an efficient manner and is not wasteful in any way. Because it is hard to make that assessment in an automated way, we settle for an interval system plus a catch-up system; that is already an approximation of what we actually want.
However, within that interval system, the original desire for backup value, efficiency, and not spending unnecessary resources when it doesn’t benefit anything does not go away.