Need help resuming local database repair

I’m not sure whether this will run forever; there’s no built-in limit. A log file would have been a more reliable record of progress, but a Profiling-level log gets big.
Regardless, I guess we don’t know anything about where it ended (e.g. was it close to finishing or still far off?).

If you wish to try, DB Browser for SQLite can inspect the database, for example by comparing the number of dindex files you can see at the remote against how many made it into the IndexBlockLink table.
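
If it helps, here’s a minimal sketch of that same count done with Python’s sqlite3 instead of the GUI (the database path is hypothetical, and it assumes the recreate got far enough to create the IndexBlockLink table):

```python
import sqlite3

# Hypothetical path to the partially rebuilt Duplicati database.
DB_PATH = "/path/to/partial-database.sqlite"

# Open read-only so the inspection can't disturb the file.
con = sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)
try:
    # How many dindex files got recorded during the recreate?
    (count,) = con.execute("SELECT COUNT(*) FROM IndexBlockLink").fetchone()
    print(f"IndexBlockLink rows: {count}")
finally:
    con.close()
```

Comparing that number against the dindex count visible at the destination gives at least a crude progress estimate.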

The default 100 KB blocksize is too small for a backup that big, but there’s no auto-tune or configuration aid. A 2 MB blocksize would have produced far fewer blocks to track, and some database operations get very slow with many blocks.
Unfortunately, blocksize can’t be changed on an existing backup; too much code assumes it stays constant.
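
To put rough numbers on that (a 1 TB source is just an assumed example, since I don’t know your exact size):

```python
# Rough block-count arithmetic for a hypothetical 1 TiB backup.
source_bytes = 1 * 1024**4  # assumed example size

for blocksize in (100 * 1024, 2 * 1024**2):  # 100 KB default vs. 2 MB
    blocks = source_bytes // blocksize
    print(f"{blocksize // 1024:>5} KB blocksize -> ~{blocks:,} blocks to track")
```

That works out to roughly 10.7 million blocks versus about 524 thousand, which is the kind of difference that shows up in database operation times.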

Nobody’s figured out the scaling-with-size slowdown yet, but I’ve been trying to get an experiment set up here.
You’re probably not in a measuring mood at the moment, but there are some tools for watching what the files are doing.

I think not, but I don’t think anybody here can give an absolute answer. There’s certainly no specific option for it.

So if you like, you can browse it yourself, or, if the database is in place now, post a link to a database bug report along with the destination file counts, so that maybe someone else can figure out whether it got anywhere near the end of the create.
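
For those destination file counts, if the destination is a local folder (or you can list it locally), a quick sketch like this might do; the path is hypothetical, and the matching assumes Duplicati’s usual dlist/dindex/dblock file naming:

```python
from collections import Counter
from pathlib import Path

# Hypothetical local copy (or mount) of the backup destination.
DEST = Path("/path/to/destination")

# Duplicati destination files contain dlist, dindex, or dblock in the
# name, before the compression/encryption suffixes (.zip, .zip.aes, ...).
counts = Counter()
for f in DEST.iterdir():
    for kind in ("dlist", "dindex", "dblock"):
        if f".{kind}." in f.name or f.name.endswith(f".{kind}"):
            counts[kind] += 1

for kind in ("dlist", "dindex", "dblock"):
    print(f"{kind}: {counts[kind]} files")
```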

I suppose you can give whatever you got a bit of a sanity check with the Verify files button, which downloads a few files and integrity-checks them against the database. To catch as many messages as possible, set up a live log at Warning level or above, and also check whether anything landed in the job log or in the server log at About → Show log → Stored.