Restore as fast+easy as possible (on new machine)... with db? How?

Hi,

I want to make the restore process as painless as possible. The restore would be made on a machine which has no Duplicati installed yet.

Restoring with configuration files already adds some convenience, I think (I have not used it yet), because otherwise the exact path (and maybe prefix) on the target has to be known, along with the auth information.

Now, since my backups are huge and contain maaaaany files, a restore from scratch takes… forever… and could be much faster if the sqlite db from the backup source were included in the process.

Let’s say I have the sqlite db from the backup side. How can I include it in the restore process as easily as possible, to speed things up? I guess I have to create a backup set (by importing it from the config file) and then point it to the database?

I’m thinking from the computer-illiterate side (let’s say my wife needs to restore and I am unavailable); I guess they will not be able to restore files after all…

Thx

The best way to go is to create a new backup job, say “sqlite”, with the “main backup” database as source and an alternate remote repository as destination, and schedule it just a minute after “main backup”.

Export the “main backup” and “sqlite” json configs.
First, import the “sqlite” json, rebuild its db, and restore the “main backup” database on the new system.
Second, import the “main backup” config, point the database path to the restored “main backup” database, and start the restore.

The sqlite backup will be just a few MBs; recreating its small database will take much less time than recreating the main backup database, which is very large in most cases.
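
If it helps, here is a minimal sketch of that two-job idea as a wrapper script. It assumes the Duplicati CLI is on the PATH (Duplicati.CommandLine.exe on Windows, duplicati-cli on Linux/macOS), and every path, URL, and passphrase below is a made-up placeholder for your own setup:

```python
# A minimal sketch of the "second job" idea: run the main backup, then immediately
# back up its sqlite database to an alternate repository. All paths, URLs, and
# passphrases are hypothetical placeholders.
import subprocess

CLI = "duplicati-cli"  # or r"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe"

MAIN_DB = "/home/user/.config/Duplicati/MAINBACKUP.sqlite"  # local db of "main backup"

# 1) Run the "main backup" job (big source, big database).
subprocess.run([
    CLI, "backup", "ftp://example.com/main-backup", "/home/user/data",
    "--dbpath=" + MAIN_DB,
    "--passphrase=main-secret",
], check=True)

# 2) Back up the main job's sqlite database to an alternate repository, so a
#    restore on a new machine only has to recreate this tiny "sqlite" job's db.
subprocess.run([
    CLI, "backup", "ftp://example.com/sqlite-backup", MAIN_DB,
    "--passphrase=sqlite-secret",
], check=True)
```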


Sounds reasonable, I will test it.

What if Duplicati backed up the respective database and json configuration alongside every backup and used them temporarily on restore in a “Direct restore from backup files” scenario?

As a first thought, this sounds very sexy to me… :slight_smile:

The json config part is possible (and can be enabled with a secret option, but it is not exposed because we have no great way to get it back out).

The database is a different problem, because it can be of significant size. And it must be added after the backup has been made.

Thx for your reply… Yes, but if the database is of significant size, the backup itself is probably of a remarkably significant size (except maybe with many small files) :slight_smile: Adding it after the backup - why not, it would still be beneficial and would solve massive restore problems. I think some support for easier restores would be great, especially given how long restores take.

Tapio, I had actually thought about doing that myself - basically making a second backup job that backed up ONLY the Duplicati folder (excluding the SQLite file that would be part of the just-created backup).

But in the end it seemed to be just as much work to restore the config backup and get it into place to then restore the data file backup.

I wonder if it would be possible for a job to, once a backup is complete, create a batch or executable file containing all the necessary commands to do a restore of the just-created backup (potentially excluding passwords & other security stuff, of course) that could then be “executed” for a quick-and-easy restore.

In theory, it could potentially even include a compressed version of the SQLite file for better performance. I guess I’d think of it as a ‘recovery app’.
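As a very rough sketch of what such a “recovery app” could do, here is a hypothetical post-backup step that writes out the restore command for the just-created backup (passphrase deliberately omitted); the URL, paths, and file names are invented for illustration:

```python
# Rough sketch of the "recovery script" idea: after each backup, write out a small
# file containing the exact restore command for that job. Names and URLs are
# hypothetical; the passphrase is left out and must be supplied at restore time.
from pathlib import Path

def write_restore_script(target_url: str, restore_to: str, out_file: str) -> None:
    # Build a Duplicati CLI restore command: "*" selects everything from the
    # latest version; --restore-path controls where the files end up.
    cmd = (
        'duplicati-cli restore "{url}" "*" '
        '--restore-path="{dest}" --passphrase="ASK-THE-OWNER"'
    ).format(url=target_url, dest=restore_to)
    Path(out_file).write_text("#!/bin/sh\n" + cmd + "\n")

write_restore_script("ftp://example.com/main-backup", "/home/user/restored", "restore-me.sh")
```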

While this is likely a pretty low need on the functionality scale (until people ACTUALLY need it, of course), I can think of one other request that this sort of functionality could satisfy. I can’t find the post right now, but somebody was asking how to provide a way for their family to restore backups of things once that person was “gone”.

This is exactly what my thoughts are about. “Family” sometimes means ZERO clue about IT, and therefore the restore should be as easy as possible.

I want to write down for them:
“Here is an URL - here is a password - Go!”

not (just a theoretical scenario following :wink: ):

“Here is an URL - here is the user name - here is the pass - download some authenticator software - create authenticator id - download duplicati - here is an sqlite file - download sql editing software - fiddle in sql files - in some duplicati menu, point to sql file … import config - etc etc etc etc”

I mean, with real people the problem already starts at downloading things. They download/save files and then can’t find the folder where they put the download.

OK I guess you see what I mean. Another example: Apple users. 'Nuff said. :smiley:

Tapio, unfortunately “easiest” and “fastest” aren’t often the same thing. That being said, this sounds like a nice potential feature - hopefully somebody with some experience building these sorts of “batch” files can chime in and let us know if it’s even realistically possible - especially across multiple platforms (what if I use Linux, mom uses Mac OS, and dad uses Windows?). :slight_smile:

Yes, this would be great (also for all those Crashplan users who are used to just that kind of functionality).

In the meantime (while we wait for that someone who can implement it), I’m wondering what might be the best solution for a slightly different scenario: what if the “new machine” is not really new but just different? Typical scenario: home computer and work computer. Both backed up to the same cloud with Duplicati (different folders, of course). How to make it easy to restore a file from the work backup to the home computer?

This would obviously work there too:

But let’s say I’m also using dropbox on both computers. Instead of “syncing” the backup database and setup via a duplicati backup job, couldn’t it just be synced directly via dropbox? What potential security issues do we have here?

Right now the easiest way to restore onto a machine different from the one on which the backup job runs varies depending on the circumstances. For example:

  • if you have the config backed up from the old machine, I’d use the “Restore from configuration …” option from the “main menu” Restore and just reference the exported .json config file
  • if you do NOT have the config backed up, I’d use the “Direct restore from backup files …” option from the “main menu” Restore and just have to enter the existing backup’s encryption password & destination info

Note that both of the above will likely require rebuilding the local database before the restore can happen, so it may not be super fast depending on the size of the backup.
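
For what it’s worth, the second (direct) option boils down to a single CLI restore call against the destination. Here’s a sketch with made-up URL, file pattern, and paths:

```python
# Sketch of the "no local database" case: restore straight from the destination.
# Duplicati builds a partial temporary database for just the selected files, so
# nothing has to be imported first. URL, pattern, and paths are made up.
import subprocess

subprocess.run([
    "duplicati-cli", "restore", "ftp://example.com/work-backup",
    "*/Desktop/vps_key*",                     # pattern for the file(s) to restore
    "--passphrase=the-backup-password",
    "--restore-path=/home/user/restored",     # where the restored files should go
], check=True)
```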

As usual “easiest” and “fastest” aren’t necessarily the same. It’s not quite the same but it makes me remember the old programmer’s axiom - “You can have it fast, cheap, or right - choose two.” :wink:

Are you responding to me or to the OP? Because I asked about

There is no

Isn’t the whole point of this topic to avoid just that?

I’m using “old” (instead of “work”) and “new” (instead of “home”) to make it easier for other readers to understand what’s going where.

You’re right - I wasn’t paying attention again, sorry.

For your scenario (Duplicati is already installed & you have the database) I would:

  1. Import the backup config from the old (work) computer (or manually create a new backup job pointing to the old / work destination)
  2. Disable “Automatically run backups” on step 4 (Schedule) of the job (just to make sure we don’t start trying to backup two machines to a single destination)
  3. Use the job “Database …” menu so I can MANUALLY edit the “Local database path:” to point to the old (work) database
  4. Then click the “Save” button (enabled when the “Local database path” is changed)
  5. Accept the “Updating with existing database” message (if shown, only displays if the newly imported job has created a local database)
  6. Accept the “Existing file found” message (if you don’t get this message, then you’re not pointing to the existing old / work database)
  7. Do a restore as needed
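
If you’d rather script it, the same idea looks roughly like this from the command line. This is only a sketch with hypothetical paths and URLs, using --dbpath to point at the copied database so no recreation is needed:

```python
# Sketch of the steps above done via the CLI: copy the latest job database brought
# over from the old (work) machine, then point the restore at it with --dbpath so
# Duplicati skips the database recreation. All paths and URLs are placeholders.
import shutil
import subprocess

OLD_DB_COPY = "/home/user/work-backup.sqlite"
shutil.copy("/mnt/dropbox/work-backup.sqlite", OLD_DB_COPY)  # db synced from the work machine

subprocess.run([
    "duplicati-cli", "restore", "ftp://example.com/work-backup", "*",
    "--dbpath=" + OLD_DB_COPY,
    "--passphrase=the-backup-password",
    "--restore-path=/home/user/restored-from-work",
], check=True)
```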

If you plan to leave this “in place” so you can always / easily restore from one place to the other then I’d suggest:

  1. Change the title of the backup job to indicate it’s ONLY for restoring
  2. Remember to bring over the latest database each time you want to do a restore

Hm, I’m not sure I understand. Let me try to explain in more detail what I’m trying to do:

I’m backing up files on both computers, and sometimes I’m looking for a file that should be there but isn’t. The other day, it was the SSH key for some VPS that I wanted to access. I knew I had saved it on my desktop, but I couldn’t remember on which computer: at home or at work? All I knew was it wasn’t on my home computer. Did I delete it?

So I checked the backup. Not there. So it was never there, which means it must be on my work computer. Whether it’s still there or deleted doesn’t matter at this point, because I’m at home and have no access to my work computer.

Thanks to Crashplan, all I had to do was check the backup of my work computer (either via the web interface or the crashplan client, neither of which care which computer you are on as long as you have the password for the archive you are trying to access). And there it was. Click restore. Done. Took less than 3 minutes and I was able to access my VPS.

I am trying to get as close to this as possible with Duplicati. But so far, I’m quite far away from it.

So this is not quite the starting point:

Yes, I have Duplicati installed on both machines, and a db exists on both of them, but only for the respective computer. I want to add the one from the other computer too (but only for restore purposes, obviously).

What I proposed above is the only way I can think of to do what you’re asking if you actually want to restore a lot of files without waiting for a full database recreation.

However, using the main menu Restore “Direct restore from backup files …” option will let you browse the backed up files without having to first create the database.

And if you want to restore SOME files, once you’ve selected them you’ll get a “Building partial temporary database …” message while it rebuilds only the parts of the database necessary to find all the blocks associated with the version of the files you’ve selected for restore.

If you haven’t tried a “Direct restore from backup files” process yet, I’d suggest you test it out on a small file - just to see how long it actually takes. For me, a 700MB full source restore took about 7 minutes, but a single .bat file took about 30 seconds.


Aaaah! That’s pretty much exactly what I was looking for. Much easier than what I had thought and although slower than Crashplan, it works fine. Thanks! :+1:

No, it will likely take months, if it even works at all.

I think it depends on size too. I just recreated a quite-small production backup in under 3 minutes. A tiny test backup took under 1 minute. There’s no survey of big ones, but some reportedly get VERY slow… There’s a theory (plausible to me because it’s common in computing) that time grows more than linearly with size. There’s another that the effective size can be reduced by using larger units. For example, see here:

Choosing sizes in Duplicati discusses the tradeoffs of, for example, a --blocksize bigger than the 100KB default; the default, if nothing else, results in a lot of book-keeping work when the size of the backup produces lots of blocks. I’m not saying Duplicati out-of-the-box handles huge backups well, but I disagree that it never handles any.
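
To put rough numbers on the “lots of blocks” point, here’s a quick back-of-the-envelope calculation (illustrative sizes only, assuming block counts simply scale with source size):

```python
# Back-of-the-envelope block counts: the bookkeeping the database has to do grows
# with the number of blocks, so a larger --blocksize shrinks the problem roughly
# proportionally. Sizes here are illustrative, not measurements.
def block_count(source_bytes: int, blocksize_bytes: int) -> int:
    return -(-source_bytes // blocksize_bytes)  # ceiling division

one_tb = 1_000_000_000_000
print(block_count(one_tb, 100 * 1024))        # ~10 million blocks at the 100KB default
print(block_count(one_tb, 1 * 1024 * 1024))   # ~1 million blocks at 1MB
```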