Very slow database recreation

If the bottleneck is in SQLite, the open question (whose answer isn’t clear) is how many of your 12 cores SQLite can actually use.
If it can use only one, then “pegged” would look like only about 8% overall usage in Task Manager (one core out of 12, i.e. 100% / 12 ≈ 8%).
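
For what it’s worth, here’s a minimal sketch (Python’s built-in sqlite3, nothing Duplicati-specific) of the main knob involved: SQLite runs each statement on a single thread, and PRAGMA threads only permits auxiliary helper threads for large sorts.

```python
import sqlite3

# SQLite executes each statement on one thread; PRAGMA threads (default 0)
# only allows extra helper threads for big sorts, so one busy connection
# on a 12-core box shows roughly 100/12 ≈ 8% total CPU.
conn = sqlite3.connect(":memory:")
print(conn.execute("PRAGMA threads").fetchone())   # usually (0,)
conn.execute("PRAGMA threads = 4")                 # permit up to 4 sort helpers
print(conn.execute("PRAGMA threads").fetchone())   # -> (4,)
```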

Raising blocksize (the --blocksize option), as mentioned earlier, helps for large backups. Is 1 TB the source size or the destination size?
SQL tweaking has been attempted in at least some places before, but we need experts. Are any out there?

The “initial backup stuck at 100%” issue was slowness in the initial backup caused by the query plan doing a SCAN instead of a SEARCH, which was fixable by running ANALYZE. Recreate may suffer the same slowness while inserting all its blocks; however, if you follow that thread to its GitHub PR, the proposal to run ANALYZE for the initial backup was rejected. Apparently other paths to speed are preferred, but I’m not sure who will do the work. Any volunteer SQL experts?
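
To illustrate the SCAN vs. SEARCH difference, here’s a sketch using Python’s sqlite3 against a toy table; the table and index names are loosely modeled on Duplicati’s local database and may not match the real schema exactly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Toy stand-ins for Duplicati's Block table and its hash/size index.
conn.execute('CREATE TABLE "Block" ("ID" INTEGER PRIMARY KEY, "Hash" TEXT, "Size" INTEGER)')
conn.execute('CREATE INDEX "BlockHashSize" ON "Block" ("Hash", "Size")')

# Ask the planner what it would do for a typical block lookup.
for row in conn.execute(
    'EXPLAIN QUERY PLAN SELECT "ID" FROM "Block" WHERE "Hash" = ? AND "Size" = ?',
    ("deadbeef", 102400),
):
    print(row)  # "SCAN Block" = slow full scan; "SEARCH Block USING INDEX" = fast

# ANALYZE refreshes the statistics the planner uses to choose between them.
conn.execute("ANALYZE")
```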

Meanwhile, I have some tentative benchmarks showing slowdowns at about a million blocks, and it’s a pretty hard wall (time growing roughly with the cube of the size). I used a tiny blocksize to test this, since I don’t have terabytes to spare. Would anyone care to test?
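
If anyone wants to poke at that wall without terabytes of data, here’s roughly the shape of my test, redone as a self-contained sketch (synthetic hashes and made-up batch sizes; this only mimics the “many tiny blocks” pattern, not Duplicati’s actual insert path):

```python
import os
import sqlite3
import time

# Insert random "block hashes" in batches and watch the per-batch time
# as the unique index grows past a million rows.
conn = sqlite3.connect("bench.sqlite")
conn.execute('CREATE TABLE IF NOT EXISTS "Block" ("Hash" TEXT, "Size" INTEGER)')
conn.execute('CREATE UNIQUE INDEX IF NOT EXISTS "BlockHashSize" ON "Block" ("Hash", "Size")')

BATCH = 100_000
for i in range(30):  # 3 million rows total
    rows = [(os.urandom(32).hex(), 102400) for _ in range(BATCH)]
    start = time.perf_counter()
    conn.executemany('INSERT INTO "Block" VALUES (?, ?)', rows)
    conn.commit()
    print(f"{(i + 1) * BATCH:>9,} rows  {time.perf_counter() - start:5.1f} s/batch")
```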

The BlocksetEntry table points to the blocks of a file (a file’s blocks are known as a blockset). I’m guessing your status bar hasn’t reached the 90% point, where the pain is mostly downloading dblock files in search of missing blocks.
The issues that cause this search are slowly being identified. Many were fixed in 2.0.5.1, but more remain.
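
For context on the terms, here’s the rough shape of that mapping (table and column names are from memory of Duplicati’s schema, so treat them as approximate):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A file's data is a "blockset"; BlocksetEntry records which block sits
# at which position within it (names approximate Duplicati's schema).
conn.execute('''CREATE TABLE "BlocksetEntry" (
    "BlocksetID" INTEGER,   -- which file's blockset this row belongs to
    "Index" INTEGER,        -- position of the block within the file
    "BlockID" INTEGER       -- reference into the Block table
)''')
```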

To know for sure where you are, look at About → Show log → Live → Verbose for progress information.
Profiling-level logs would, of course, be a superset of that. Look for lines containing “Processing” along with current/total counts.