I’m doing an initial backup that will take quite a few days to complete. I may need to do voluntary reboots in the interim.
I would like to be able to pause, reboot, and resume, where resume means continuing from where I left off. I don’t really want it to start from the beginning, even if, in theory, it’s good at skipping work already done.
On my first attempt at this long backup, it locked up after a day. Multiple mono-sgen processes had the SQL file open, and nothing short of killing the server process worked. After the restart it showed the backup right back at the beginning, with a day lost.
Irrespective of whether I can recover from a lockup, I do need to know whether, under normal conditions, I can pause, reboot, and resume with no time loss.
By the way, this is on Ubuntu 18.04.
My only feedback here is that Duplicati is sometimes very bad at visually indicating how much of a backup is actually left to run, such as when you’re restarting a half-completed backup job, because it doesn’t fully assess up front which data needs to be backed up. If it finds something to upload within the first few files it analyzes, for example, it’ll sit at “2% completed” until that file is done, then suddenly blow past a bunch of files it finds are already complete.
My suggestion in the meantime is that you consider starting off with a smaller backup set, then, when it’s done, add more things, gradually building up to the full set.
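To make the gradual build-up concrete, here is a sketch of how successive runs might look from the command line, assuming the `duplicati-cli` wrapper that the Linux packages install. The destination URL, credentials, and source paths are all placeholders, and the same gradual approach works by editing the source selection in the web UI instead.

```shell
# Hypothetical sketch: grow the source list across successive runs.
# The B2 URL, keys, and /data paths below are placeholders.

# First run: only the most important subfolder.
duplicati-cli backup "b2://my-bucket/backup?auth-id=XXX&auth-key=YYY" \
    /data/photos

# After that completes, re-run with more sources added.
duplicati-cli backup "b2://my-bucket/backup?auth-id=XXX&auth-key=YYY" \
    /data/photos /data/documents

# Eventually switch to the parent folder so that new subfolders
# are picked up automatically on future runs.
duplicati-cli backup "b2://my-bucket/backup?auth-id=XXX&auth-key=YYY" \
    /data
```

Each run finishes a complete, restorable backup before the set grows, which sidesteps the interrupted-backup problems discussed above.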
Welcome to the forum @kmand
I like the @drakar2007 plan also, because you get to choose what’s most important to back up quickly, and because of a quirk where a restore can’t be done at all until the first backup has finished on its own.
The current canary can do a bit better, and has some bugs removed (but not all) around interrupted backups.
After interrupted backup, Duplicati wants to start re-uploading everything! explains how the topic title wasn’t actually what was happening, and points to what in theory should happen, though some of it doesn’t work yet…
I’d recommend the conservative plan of small increments that are allowed to finish. Awkward, but safer.
I’m in about the same situation as well - a big backup set, all subfolders of the same folder.
If I do what you hint at here, I would have to select some subfolders and let them upload, then choose some more, until all of them are chosen and uploaded.
The problem is that over time more and more subfolders will appear, and I would rather not explicitly add each new folder to the job’s set of folders to back up. Instead I would just add the main folder and let Duplicati back up whatever is in there, with all its subfolders.
If I first individually add the subfolders and let them upload, and then select the main folder (hence unchecking the subfolders), will this work, or will it cause Duplicati to re-upload everything?
I suppose a re-upload could happen, since the relative path will have changed…
Due to the volume of data to back up, I’d rather not find out the hard way.
Everything’s effectively absolute paths, and even if a file moves, its content is just deduplicated, not uploaded again.
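To illustrate why a moved file doesn’t get re-uploaded, here is a toy sketch of block-level deduplication. This is not Duplicati’s actual implementation; the tiny block size and the in-memory “store” are purely illustrative.

```python
import hashlib

def block_hashes(data: bytes, block_size: int = 4) -> list:
    """Split data into fixed-size blocks and hash each one,
    roughly how deduplicating backup tools decide what to upload."""
    return [
        hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    ]

# Blocks already uploaded from the file at its old path.
stored = set(block_hashes(b"same file content"))

# "Moving" the file changes its path, not its bytes, so every
# block hash is already in the store and nothing new is uploaded.
moved = block_hashes(b"same file content")
new_blocks = [h for h in moved if h not in stored]
print(len(new_blocks))  # 0
```

The path is just metadata recorded against existing blocks, which is why rearranging the source selection doesn’t force a second upload of the same data.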
There’s a new feature in Canary that might be ideal for your case, assuming the bugs in the initial implementation are ironed out – it’s believed they are, but it could use some more real-world testing.
Using the Stop button and choosing “Stop after current file” should produce a partial backup that can be continued later, or restored (if you need to for some reason) with whatever it has backed up thus far. Controlling backup order (if it matters) would still have to be done by selecting subfolders in order…
Canary is bleeding-edge and can be unpredictable; however, it’s currently working its way toward Beta. Changing your update channel to Beta would let you wait until it actually gets there.
Whichever method you choose, you can give it a try on a smaller area to verify it works as you wish.
I’ve done it; it will work.