Resume backup after PC restart

Hello,
I would like to back up my PC. It will take several days.
Apparently, when I start my PC each morning, the backup does not resume automatically.
Is there a feature to do that?
Thanks

An interrupted backup is not automatically restarted when the interruption is resolved. However, at the next scheduled run time Duplicati will continue where it left off just before the interruption.

However, I believe a MISSED scheduled backup is started as soon as possible, so if you schedule your backups for early in the morning (such as before you turn your machine on), they’ll start as soon as the machine comes on. If they take longer than a day to run, they’ll just continue the next day.

Alternatively, a frequently scheduled backup (such as hourly instead of daily) would recover “sooner” as well.

I know it’s not ideal, but keep in mind an aggressive schedule like this can be used during the initial backup and then changed to something lighter (such as daily) once the first backup job is done…


OK, thank you. Your solution is fine for me, but I would like to install Duplicati on my mother’s PC, and I am sure that she will not change the schedule once the initial backup is finished.

Do you know if the development of this feature is planned?
Thanks

Not that I’m aware of. But if it were, how would you expect it to work? Things to consider include:

  • what to do if other job schedules were missed during the powered-down time
  • what to do if the interrupted job itself missed a scheduled run during the powered-down time

As far as your mother’s PC goes, I’m afraid you’re stuck with options such as:

  • just let it run its normal schedule over multiple days and it will eventually get done
  • get remote access to your mother’s PC so you can go in and change the schedule once the initial backup is done (something I’ve found handy to have with all my family members for more than just backup maintenance…)

Thanks JonMikelV

Then the other jobs would have to wait for this job to be finished. Anyway, in my case I have only 1 job for my whole PC.

The interrupted job would restart for 2 reasons: resuming the interrupted job and catching up the missed schedule. Both point to the same action, so there is no conflict between the two instructions.

Cheers

In my opinion this feature is important. Imagine the following case:

  • backup scheduled every 2 weeks
  • use of my computer 8 hours per day
  • I place a large amount of data on my computer, requiring 100 hours of upload

On the day a backup is scheduled, the upload will start. Since my computer is turned off 8 hours later, the backup will not resume until 2 weeks later, when 8 more hours will be uploaded. At a pace of 8 hours every 2 weeks, the 100-hour upload takes about 25 weeks, so the full backup will not be done for roughly 6 months… This is a big issue.

With an automatic restart of stopped backups, the full backup would be finished in less than 2 weeks (100 h ÷ 8 h per day ≈ 13 days).

Why must it be every 2 weeks? Is it because you only want to keep one backup per 2 weeks?

A better solution for you may be to run backups much more frequently and then use --retention-policy to clean up old backups, e.g. 2W:1D,10Y:2W to keep one backup per day for 2 weeks and then one backup per 2 weeks after that.

Then you don’t have the above issue, and you also gain the benefit of more recent backups: if your machine fails 1 day before its next backup, you aren’t stuck restoring 13-day-old files.

And to top it all off, your machine will finish its backup much more quickly because it keeps up to date with changes every day, and you won’t have backups that take over 8 hours :slight_smile:
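
In case it helps to see what a string like 2W:1D,10Y:2W actually does, here is a rough Python sketch of the idea. The parsing and keep/delete decisions are my own simplification for illustration, not Duplicati’s actual retention code:

```python
from datetime import datetime, timedelta

UNITS = {"D": 1, "W": 7, "M": 30, "Y": 365}   # simplified: months/years as fixed day counts

def parse_span(text):
    # "2W" -> timedelta(days=14)
    return timedelta(days=int(text[:-1]) * UNITS[text[-1].upper()])

def parse_policy(policy):
    """Parse e.g. '2W:1D,10Y:2W' into [(timeframe, interval), ...] pairs."""
    rules = []
    for part in policy.split(","):
        timeframe, interval = part.split(":")
        rules.append((parse_span(timeframe), parse_span(interval)))
    return rules

def backups_to_keep(backup_times, policy, now=None):
    """Simplified view: within each timeframe, keep roughly one backup per interval."""
    now = now or datetime.now()
    keep = set()
    for timeframe, interval in parse_policy(policy):
        last_kept = None
        for t in sorted(backup_times, reverse=True):   # newest first
            if now - t > timeframe:
                continue                               # outside this rule's window
            if last_kept is None or last_kept - t >= interval:
                keep.add(t)
                last_kept = t
    return sorted(keep)

# 30 daily backups, policy keeps dailies for 2 weeks, then ~1 per 2 weeks after that.
now = datetime(2024, 1, 31)
daily = [now - timedelta(days=i) for i in range(30)]
print(len(backups_to_keep(daily, "2W:1D,10Y:2W", now)))   # far fewer than 30
```

The point is just that older backups thin out automatically, so running daily does not mean storing hundreds of versions forever.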


Challenge accepted! :smiley:

I have a Raspberry Pi in a microsat in low Earth orbit that uses line-of-sight microwave communication, so orbital alignment with my terrestrial transmitter only happens bi-weekly.

But seriously, the retention policy suggestion is a good one; however, I agree that “resume interrupted backup” (including interruptions due to power loss, source drive disconnection, the destination being unavailable, etc.) would be a nice feature to have.

With the job queue priority changes that I think are coming in soon, it might actually be easier to implement such a thing than it was previously. :crossed_fingers:


I agree it’s a good idea to have some robust handling of scenarios where the user, for example, shuts down their computer without considering the running backup task.

I’ve definitely interrupted Crashplan backups by just not caring and restarting the machine without checking… Users are just like that :wink:

But as always we should make sure we’re solving an actual problem and not just symptoms of bad configuration :slight_smile:


Thank you for your suggestion of a retention policy.
The problem is that I am using Duplicati with cold storage, so I have disabled the auto-compact option (as mentioned here: Azure Blob - Using Archive Access Tier - #6 by kenkendk). By the way, does auto-compact really need to download the files? Can’t it use the local database instead? In that case maybe I can re-enable this option.
And with the auto-compact option disabled, I thought it was preferable not to do backups so often, to limit the size of the backup. What do you think?

I agree. When you do that with Duplicati you get an error message. That shouldn’t be the case; it scares the user even though everything is working fine.

Auto-compact should only take effect when volumes need to be combined or updated. When that happens it must download the contents to combine and re-upload them, but otherwise I don’t think it needs to download anything.
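
To illustrate the “only when volumes need to be combined” part: the decision itself can come from bookkeeping in the local database, and only the volumes that are actually going to be repacked need to be downloaded and re-uploaded. A very rough sketch (the field names and the 25% figure are my own assumptions, loosely mirroring the --threshold idea, not the real implementation):

```python
# Deciding *whether* to compact can be done from the local database alone;
# only volumes that actually get repacked are downloaded and re-uploaded.

WASTE_THRESHOLD = 0.25   # assumed 25% wasted-space threshold (cf. --threshold)

def volumes_needing_compaction(volumes):
    """volumes: list of dicts with 'name', 'size', 'wasted' byte counts from the DB."""
    return [v for v in volumes if v["wasted"] / v["size"] > WASTE_THRESHOLD]

volumes = [
    {"name": "duplicati-b001.dblock.zip", "size": 50_000_000, "wasted": 2_000_000},
    {"name": "duplicati-b002.dblock.zip", "size": 50_000_000, "wasted": 30_000_000},
]
for v in volumes_needing_compaction(volumes):
    print("would download, repack and re-upload:", v["name"])
```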

The growth from each backup will largely depend on how much data is changed, so it’s hard to say as a general rule what works best for a given use case.

I think the error logs are a bit aggressive as we’re not stable yet. But yeah, interruptions that are caused by intentionally closing Duplicati or intentionally cancelling a job should not generate errors :slight_smile:

Thanks
My understanding is that if I add 1 new file every day, and I remove it the day after, Duplicati will create 1 dlist, 1 dindex and 1 dblock file every day. If I only do the backup every 2 weeks, it limits the number of files (I have no auto-compact due to cold storage).

Following this discussion I have created these 2 issues on GitHub:

Well, the number of blocks depends on the size of the file. But if it fits within 1 block, then only 1 block will be uploaded. Also, I think as long as it fits within 1 volume, it will just upload that one volume (of 1 block). So yes, 3 files are created for each backup in that case, and with --retention-policy Duplicati could clean these up by simply deleting those 3 files again, without needing to do any compacting, because no other volumes are touched.
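
As a back-of-the-envelope check, here is a toy simulation of that scenario. The whole model (exactly 3 files per backup, whole versions deletable on their own) is deliberately simplistic and not Duplicati’s internals:

```python
# Toy model: every backup that contains new data uploads one dlist, one dindex
# and one dblock file. If retention later drops that backup version and nothing
# else references its blocks, all three files can simply be deleted; no
# download/compact step needed.

def remote_file_count(num_backups, keep_versions):
    versions = []                                  # each entry = the 3 files of one backup
    for n in range(num_backups):
        versions.append({f"{n}.dlist", f"{n}.dindex", f"{n}.dblock"})
        while len(versions) > keep_versions:
            versions.pop(0)                        # drop a whole version: 3 files deleted
    return sum(len(v) for v in versions)

print(remote_file_count(num_backups=365, keep_versions=14))  # daily + retention: 42 files
print(remote_file_count(num_backups=26, keep_versions=26))   # biweekly for a year: 78 files
```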

Thanks for the tickets, I think they’ll create a good foundation for making Duplicati a less error-throwing application :slight_smile:

If “missed” backup jobs are automatically run, but an interrupted backup job isn’t, does that mean an interrupted backup job is counted as “not missed”? This seems like a mistake. Without looking at the internals, it seems like Duplicati only considers “job started”, when maybe it should take both “job started” and “job finished” into account when determining whether there are any jobs to run at start-up?
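
To make the suggestion concrete, a startup check along these lines would treat a job as runnable if it either missed its window or started but never finished. The field names are invented for illustration; I don’t know how the scheduler is actually structured:

```python
from datetime import datetime, timedelta

def should_run_at_startup(job, now=None):
    """job uses invented fields: 'last_started', 'last_finished' (datetime or None)
    and 'interval' (timedelta)."""
    now = now or datetime.now()
    missed = job["last_started"] is None or now - job["last_started"] >= job["interval"]
    interrupted = (job["last_started"] is not None
                   and (job["last_finished"] is None
                        or job["last_finished"] < job["last_started"]))
    return missed or interrupted

job = {
    "last_started": datetime.now() - timedelta(hours=10),
    "last_finished": None,                     # machine was shut down mid-backup
    "interval": timedelta(weeks=2),
}
print(should_run_at_startup(job))              # True: not "missed", but it was interrupted
```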

I believe it just looks at started time, so yes.

I think that’s a valid point: a backup might fail temporarily and then miss its backup window, which is a problem for backups whose windows are far apart. But we also need to be sure it doesn’t restart by itself when the user told it to stop, and it isn’t ideal either if it crash-loops and makes 200 backup attempts within an hour because of an error with the job.


That had occurred to me too. My hope was that maybe these special considerations would be evaluated at (Duplicati) startup only, thus perhaps avoiding the accidental infinite-loop worries.

We need some kind of logic to handle both cases while also attempting, if at all possible, to recover from a failed backup attempt.

But knowing which case we’re dealing with seems to be the tricky part, especially because it’s somewhat subjective :slight_smile:
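
One possible compromise (purely a sketch of a policy, not a claim about how Duplicati does or should do it) would be to evaluate resumes only at startup and cap the number of automatic resume attempts per interruption:

```python
MAX_AUTO_RESUMES = 3   # arbitrary cap so a genuinely broken job can't crash-loop

def resume_decision(job):
    """job uses invented fields: 'interrupted', 'stopped_by_user', 'auto_resume_count'.
    Meant to be evaluated once, at Duplicati startup, not in a tight loop."""
    if not job["interrupted"]:
        return "nothing to do"
    if job["stopped_by_user"]:
        return "leave it alone"              # user explicitly asked it to stop
    if job["auto_resume_count"] >= MAX_AUTO_RESUMES:
        return "give up and warn the user"   # persistent error, not just a power loss
    job["auto_resume_count"] += 1
    return "resume"

print(resume_decision({"interrupted": True,
                       "stopped_by_user": False,
                       "auto_resume_count": 0}))   # -> "resume"
```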


I would just like to add that I would find an “automatically restart interrupted jobs” feature very useful.

Thanks for a great program!