If I make a GUI backup to local storage (a USB drive), move that USB drive to a different computer, use CLI commands to create a database (I’ve had issues copying DBs - different OSes perhaps?), and then use the CLI to restore the full backup on the second computer - everything restores, no errors. Can you have a better backup verification than that?
Or do I even need to recreate the DB? Could I just do a direct restore from the backup files?
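In other words, something like this on the second machine, without copying or recreating anything first (the backend URL and paths below are made up; as I understand it, a restore with no local DB builds a temporary one from the dlist/dindex files):

```shell
# Hypothetical backend URL and target path -- substitute your own.
BACKEND="webdavs://server.example.com/remote.php/webdav/backups"

# Restore everything straight from the backend; with no --dbpath given,
# Duplicati reconstructs what it needs from the dlist/dindex files:
duplicati-cli restore "$BACKEND" "*" --restore-path=/mnt/usb/restore-test
```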
I asked because I’ve had many problems trying to reconcile the GUI test with the CLI test, whether I copy the DB from the source machine to local storage or rebuild the DB locally. (Running beta 22.214.171.124.)
I like to run the verifies locally, which does me no good if the CLI test throws lots of ‘extra files’ messages that I then have to try to verify on customer PCs. Instead, I’m doing a full restore on all of my systems, staggered throughout the month. A couple of 1 TB SSDs with a SATA/USB cable work just fine. They are large enough to hold about a week’s worth of full restores, and then I wipe them.
More fuel for the fire.
Create a backup on Windows 10 to a Linux box running Nextcloud/WebDAV.
On Windows, run the command-line test all - no errors.
Copy the Windows DB to Linux.
On the Linux box, run the CLI test all - errors: a bunch of “extra unknown files”.
If I delete the DB on Linux, recreate it on Linux, and repeat the same test - everything verifies.
If I try to repair the existing DB (created on and moved from Windows), sometimes it works and sometimes it doesn’t. So far (5 client tests), recreating the DB on the Linux side yields the most consistent results.
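The Linux-side comparison, roughly in command form (the URL and paths are made up; duplicati-cli is the Linux wrapper for Duplicati.CommandLine.exe):

```shell
# Hypothetical backend URL and DB path -- substitute your own.
BACKEND="webdavs://server.example.com/remote.php/webdav/backups"
DB=/home/me/backup.sqlite

# Test against the DB copied over from Windows -- this is the run that
# reported the "extra unknown files" errors:
duplicati-cli test "$BACKEND" all --dbpath="$DB"

# Delete the copied DB, let repair recreate it from the backend,
# then run the same test -- this run verified clean:
rm "$DB"
duplicati-cli repair "$BACKEND" --dbpath="$DB"
duplicati-cli test "$BACKEND" all --dbpath="$DB"
```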
The DBs are definitely different on Windows and Linux, and maybe that’s expected and normal - I just needed to be hit with a bigger DOH! brick. Different OSes implement SQL differently (but just close enough to frustrate the lesser among us) - could that be the entire problem? If so, then copying DBs between machines is a waste of time. Just rebuild the DB every time, and then your local test works as expected and matches the results from the client.
The downside, though, is that rebuilding your DB takes longer and longer as your backup grows. I believe one of the postings mentions a Canary release that addresses rebuild speed - and since I’m only doing this to implement a verification, I will give that a try.
Another reason the databases are different between Linux and Windows is the path differences - both the folder names and the directory separator character.
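A trivial illustration of why byte-for-byte comparison of stored paths fails across the two OSes (the paths are made up):

```shell
# The same file is recorded with different path strings by each OS:
winpath='C:\Users\alice\Documents\report.txt'
linuxpath='/home/alice/Documents/report.txt'

# Byte-for-byte, the stored strings can never match:
if [ "$winpath" = "$linuxpath" ]; then echo "same"; else echo "different"; fi
# prints "different"

# Even after swapping separators, the drive-letter prefix still differs:
converted=$(printf '%s' "$winpath" | tr '\\' '/')
echo "$converted"
# prints "C:/Users/alice/Documents/report.txt"
```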
When you are recreating the database, go to About → Show Log → Live → Verbose and watch to see whether it’s downloading dblocks. Under normal conditions it should only need to download dindex and dlist files; if it’s downloading dblocks, it will take a lot longer. Using the latest Canary is part of the solution to that problem, but not the whole picture. I can get into that in more depth if you confirm dblocks are being downloaded and you want to fix the issue.
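If you’d rather watch from the CLI than the GUI live log, something like this should show the same thing (the URL and DB path are placeholders, and --console-log-level is only available in reasonably recent versions):

```shell
# Placeholders -- substitute your backend URL and DB path.
duplicati-cli repair "webdavs://server.example.com/backups" \
  --dbpath=/tmp/recreate.sqlite --console-log-level=Verbose | tee recreate.log

# A healthy recreate only downloads dlist and dindex files;
# lots of matches here means the slow dblock path was taken:
grep -ic "dblock" recreate.log
```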
I made a post here about it - please proceed with caution:
You MAY see the problem again in the future if you are running 126.96.36.199. I haven’t had the issue come back on my machines where I run more recent Canary versions. (In general I don’t recommend people use Canary, but in my opinion 188.8.131.52 is more stable and better than 184.108.40.206.)