Verifying Backend Data

We’re getting a little bit off topic here, but I’d recommend looking at this post:

In that post, Kenkendk says that in THEORY a block size of up to 2GB should be supported (though there’s no mention of whether or not that’s a good idea). :wink:

Overall I’d say a jump in --blocksize from 100KB to 100MB should work just fine, but it might be a bit drastic. At the default 100KB you’re looking at roughly 10.7 million block-hash rows per 1 TB of source data.

Shifting to a more modest 1MB block size would take that down to just over 1 million block-hash rows. That won’t necessarily yield a 10x smaller sqlite file or 10x faster performance, but it should show a fair bit of improvement…
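
If it helps, here’s a quick back-of-the-envelope calculation of where those row counts come from (plain Python, just illustrating the arithmetic; the real row count will also depend on deduplication and how much source data actually changes):

```python
# Rough block-hash row counts per 1 TiB of source data
# for a few candidate --blocksize values.

TIB = 1024 ** 4  # 1 TiB of source data, in bytes

for label, blocksize in [("100KB (default)", 100 * 1024),
                         ("1MB", 1024 ** 2),
                         ("100MB", 100 * 1024 ** 2)]:
    rows = TIB // blocksize
    print(f"{label:>16}: ~{rows:,} block-hash rows per TiB")
```

That prints roughly 10.7 million rows at 100KB, just over 1 million at 1MB, and about 10 thousand at 100MB.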