Restore as fast+easy as possible (on new machine)... with db? How?

I think it depends on size too. I just recreated a fairly small production backup in under 3 minutes, and a tiny test backup took under 1 minute. There's no survey of big ones, but some reportedly get VERY slow… One theory (plausible to me because it's common in computing) is that time grows faster than linearly with size. Another is that you can shrink the effective size by using larger units, so the same data yields fewer blocks. For example, see here:

Choosing sizes in Duplicati discusses the tradeoffs of, for example, raising --blocksize above its 100KB default; staying at the default, if nothing else, means a lot of bookkeeping work when a large backup produces lots of blocks. I'm not saying Duplicati handles huge backups well out of the box, but I disagree that it never handles any.
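To make the arithmetic concrete, here's a minimal sketch (not anything Duplicati itself runs) of how block count falls as --blocksize rises, with a purely illustrative quadratic cost standing in for the "faster than linear" theory above; the 1 TiB backup size and the cost model are my assumptions, not measurements:

```python
# Rough arithmetic only: block counts at different --blocksize values.
# The quadratic "relative cost" is a made-up stand-in for the superlinear
# theory discussed above, NOT a measured Duplicati characteristic.

backup_bytes = 1 * 1024**4  # assume a 1 TiB source backup (illustrative)

for blocksize_kb in (100, 1024, 10240):  # 100KB default, then 1MB and 10MB
    blocks = backup_bytes // (blocksize_kb * 1024)
    # If per-backup work grew with the square of the block count,
    # a 10x larger blocksize would cut the cost roughly 100x.
    relative_cost = (blocks / 1e6) ** 2
    print(f"{blocksize_kb:>6} KB blocksize -> {blocks:>12,} blocks, "
          f"relative cost ~{relative_cost:,.2f}")
```

At the 100KB default that hypothetical 1 TiB backup is roughly 10.7 million blocks; at 1MB it's about 1 million, which is why larger units can matter so much if bookkeeping really is superlinear.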