I’m seeing a similar issue. See my post here: Verifying Backend Data for more detail.
My gut feeling is that this is a problem for a large backup set (3.5 TB of data) that was backed up using small (50 MB) chunks. The result is a sqlite DB that is over 8 GB, likely with a huge number of rows that end up in that join.
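As a rough sanity check on those numbers (assuming the 50 MB figure is the remote volume size and decimal units), 3.5 TB split into 50 MB pieces is on the order of tens of thousands of volumes, each contributing many rows to the Block and BlocksetEntry tables:

```python
# Back-of-the-envelope arithmetic only -- assumes decimal TB/MB and that
# the 50 MB "chunks" are the remote volume size.
total_bytes = 3.5 * 10**12   # 3.5 TB
volume_bytes = 50 * 10**6    # 50 MB
volumes = int(total_bytes / volume_bytes)
print(volumes)  # -> 70000
```

With each volume holding many blocks, it is easy to see how the tables feeding that join reach tens of millions of rows.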
The plan for the query looks like:
sqlite> EXPLAIN QUERY PLAN SELECT "A"."Hash", "C"."Hash" FROM (SELECT "BlocklistHash"."BlocksetID", "Block"."Hash", * FROM "BlocklistHash","Block" WHERE "BlocklistHash"."Hash" = "Block"."Hash" AND "Block"."VolumeID" = ?) A, "BlocksetEntry" B, "Block" C WHERE "B"."BlocksetID" = "A"."BlocksetID" AND "B"."Index" >= ("A"."Index" * 3200) AND "B"."Index" < (("A"."Index" + 1) * 3200) AND "C"."ID" = "B"."BlockID" ORDER BY "A"."BlocksetID", "B"."Index";
selectid  order  from  detail
--------  -----  ----  ------
0 0 0 SCAN TABLE BlocklistHash USING INDEX BlocklistHashBlocksetIDIndex
0 1 2 SEARCH TABLE BlocksetEntry AS B USING PRIMARY KEY (BlocksetID=? AND Index>? AND Index<?)
0 2 3 SEARCH TABLE Block AS C USING INTEGER PRIMARY KEY (rowid=?)
0 3 1 SEARCH TABLE Block USING INDEX Block_IndexByVolumeId (VolumeID=?)
0 0 0 USE TEMP B-TREE FOR RIGHT PART OF ORDER BY