Let me start by saying this is all regarding an ABUSE TEST scenario, so there’s no need to actually “solve” this. I’m just curious.
In the 2.0.2.1 beta I created a local backup of a single 15G VM disk image file with a zip compression level of 1 and a ridiculously small block size of 15KB, with the goal of testing the performance of JUST high block counts.
I expected to end up with about 308 dblock files associated with just over 1 million blocks, but instead the job died after about an hour with a “database or disk full” message.
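For anyone checking my math, here’s how I got those numbers (a quick sketch; it assumes the default 50MB dblock/volume size, since I didn’t change that setting):

```python
# Back-of-the-envelope math for the test above.
# Assumes the default 50 MB dblock (volume) size; sizes use binary units.
file_size = 15 * 1024**3        # 15 GiB source VM disk image
block_size = 15 * 1024          # the 15 KB block size from my test
dblock_size = 50 * 1024**2      # assumed default 50 MB volume size

blocks = file_size // block_size            # 1,048,576 blocks
dblocks = -(-file_size // dblock_size)      # ceiling division -> 308 volumes

print(f"blocks:  {blocks:,}")   # 1,048,576 (just over 1 million)
print(f"dblocks: {dblocks:,}")  # 308
```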
Both my main and destination drives have over 90G free (each), so I don’t see how the disk could be full, and since SQLite (v3) can handle up to 18,446,744,073,709,551,616 rows in a database of up to 140TB, I’m confused as to what caused the error message.
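In case it helps anyone reproduce this, here’s the kind of sanity check I can run against the local job database (the filename below is hypothetical, and these are standard SQLite pragmas, nothing Duplicati-specific):

```python
import sqlite3

# Inspect SQLite's own size ceiling for this particular database file.
# Replace the path with your actual Duplicati job database.
con = sqlite3.connect("Duplicati-job.sqlite")
page_size = con.execute("PRAGMA page_size").fetchone()[0]
max_pages = con.execute("PRAGMA max_page_count").fetchone()[0]
print(f"page size:       {page_size} bytes")
print(f"max page count:  {max_pages:,}")
print(f"theoretical cap: {page_size * max_pages / 1024**4:.1f} TiB")
con.close()
```

Either way, that cap is nowhere near what a ~1 million block backup should need, which is why the error surprises me.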
My plan du jour is to pick a smaller file (maybe only 8G) and try again, just to see at what point it breaks.