Recreating database logic/understanding/issue/slow

There’s a per-file hash taken at backup time that’s compared at restore time, so the restored files should have been verified fine.

I lost track of things. If this was a full Recreate rather than a “direct restore”, it “should” show multiple versions (I’m glad it got all the way out of Temp before things went bad). A direct restore builds a single-version database, which in theory should be faster, but it’s also single-use: it serves only that restore and can’t be used for further backups.
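For reference, here’s a rough sketch of what the two paths look like from the command line (the storage URL and dbpath are placeholders, and on Windows the executable is Duplicati.CommandLine.exe rather than duplicati-cli):

```
# Full Recreate: run repair with the local database missing (or moved aside);
# Duplicati rebuilds it from the remote dlist/dindex files.
duplicati-cli repair <storage-url> --dbpath=/path/to/backup.sqlite

# Direct restore: build a temporary, single-version database just for this restore.
duplicati-cli restore <storage-url> "*" --no-local-db --restore-path=/tmp/restore-target
```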

Continuing to use the database will get it a bunch of self-checks, primarily before and after each backup. Some of these are internal DB consistency checks, while a few also check the backup files themselves. There are only a few on-demand DB test tools. The REPAIR command isn’t something to run routinely, as it attempts to synchronize the database and the remote. This backfires spectacularly if the local database was restored from something like an old DB backup: it finds remote files it doesn’t know about and deletes them. Your database “should” be super-fresh, but I’m not 100% sure how sane it is. Protect your backed-up files (which might be hard without a copy or a folder-permission change) and your hard-earned database (copy it). The LIST-BROKEN-FILES command is supposed to be safer, as it’s purely informational: it previews what its sister command PURGE-BROKEN-FILES would do.
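To make the distinction concrete, a hedged sketch of the two commands (placeholders again; check the options against your version’s help output before relying on them):

```
# Informational: lists files that would be affected; makes no changes.
# Its sister command, purge-broken-files, is the one that actually removes data.
duplicati-cli list-broken-files <storage-url> --dbpath=/path/to/backup.sqlite

# Potentially destructive: synchronizes database and remote, deleting remote
# files the database doesn't know about. Take copies of both before running it.
duplicati-cli repair <storage-url> --dbpath=/path/to/backup.sqlite
```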

There may be some risk that hidden problems in older versions of either the database or the remote get discovered when compact runs and gathers partially-filled files of various ages into new dblocks. The most conservative move would be to start a fresh backup but keep the old one for history, though that has drawbacks. You can also do test restores of as many old versions as you like, and you can run the TEST command, which samples the remote files (up to and including all of them) and checks their integrity against the DB’s view; in your case they “should” match really well after the recent Recreate. A small version of this check also runs on every backup, and it can be told to pick a larger sample by setting --backup-test-samples as desired.
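As a sketch of how that looks (sample counts and URLs are placeholders, and as I understand it --full-remote-verification checks the contents inside each downloaded volume rather than just the volume’s own size and hash):

```
# On-demand integrity test of a sample of remote files against the database's records.
duplicati-cli test <storage-url> <number-of-samples> --dbpath=/path/to/backup.sqlite

# Test every remote file, verifying contents too (this can mean a lot of downloading).
duplicati-cli test <storage-url> all --full-remote-verification

# Enlarge the small sample that gets verified automatically after each backup (default is 1).
duplicati-cli backup <storage-url> <source-folder> --backup-test-samples=10
```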

Over time, databases can become less efficient in space usage. See forum discussions about VACUUM.
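If you decide to try it, the command-line form is roughly as below (dbpath placeholder as before):

```
# Runs SQLite VACUUM on the local database to reclaim space; can take a while on big databases.
duplicati-cli vacuum <storage-url> --dbpath=/path/to/backup.sqlite
```

I believe there is also an --auto-vacuum option if you’d rather have Duplicati do this on its own, but check your version’s option list.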

I found two reports of the Azure problem specifically, and one that mentions it as a side note. None have been figured out.

Test with --full-remote-verification Throws an Error and Hangs
Backups not running/error out after update to 2.0.4.5_beta_2018-11-28
Metadata was reported as not changed, but still requires being added?