"database or disk is full" cause?

Let me start by saying this is all regarding an ABUSE TEST scenario, so there’s no need to actually “solve” this. I’m just curious.

In 2.0.2.1 beta I created a local backup of a single 15G VM disk image file with a zip-compression level of 1 and a ridiculously low block size of 15KB with the goal of checking performance of JUST high block counts.

I expected to end up with about 308 dblock files associated with just over 1 million blocks, but instead the job died after about an hour with the “database or disk full” message.
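For anyone curious where those numbers come from, here’s a back-of-envelope sketch. It assumes the 15G is 15 GiB, the 15KB block size is 15 KiB, and the dblock (volume) size was left at Duplicati’s default of 50 MiB; compression is ignored for the estimate:

```python
# Rough estimate of block and dblock counts for the job described above.
# Assumptions: 15 GiB source, 15 KiB blocks, default 50 MiB dblock size.
GiB = 1024 ** 3
KiB = 1024
MiB = 1024 ** 2

source_size = 15 * GiB
block_size = 15 * KiB
dblock_size = 50 * MiB

blocks = -(-source_size // block_size)    # ceiling division
dblocks = -(-source_size // dblock_size)  # upper bound, pre-compression

print(blocks)   # 1048576 -- just over a million blocks
print(dblocks)  # 308 dblock files
```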

Both my main and destination drives have over 90G free (each), so I don’t see how the disk could be full, and since SQLite (v3) can handle up to 18,446,744,073,709,551,616 rows in a database of up to 140TB, I’m confused as to what caused the error message.

My plan du jour is to pick a smaller file (maybe only 8G) and try again, just to see at what point it breaks. :smiling_imp:

Did you capture a stack trace or similar that explains where the “disk full” message originates?

Not specifically - I just have what’s in the logs.

I’d be happy to run the job again if you can tell me how to get the stack trace you want…

Oh, and not that it matters much, since it might be expected in a failure scenario, but the UI (not restarted since the failure) does not reflect the actual backup space used. (UI shows 200k from a test run vs. 1.5G of actual archive files.)

Never mind - apparently I’m an idiot and forgot about the --dbpath drive. :blush:
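For anyone else who hits this: the backup source and destination drives being roomy doesn’t help if the local database (set via `--dbpath`) lives on a different, full drive. A quick sketch for checking free space on the drive that actually holds the database (the path below is a hypothetical example, not my actual location):

```python
import os
import shutil

# Hypothetical --dbpath value; substitute your actual database path.
dbpath = os.path.expanduser("~/.config/Duplicati/backup.sqlite")

# Walk up to the nearest existing ancestor so the check works even
# if the database file itself hasn't been created yet.
probe = os.path.dirname(dbpath)
while not os.path.exists(probe):
    probe = os.path.dirname(probe)

usage = shutil.disk_usage(probe)
print(f"free on the --dbpath drive: {usage.free / 1024**3:.1f} GiB")
```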
