Out of space on /tmp/, 74 GB free

If that reading is from df and is fairly steady (so you’re not just missing a moment of temporary fullness), as in the original post, it’s odd…
The topic Fixing “Insertion failed because the database is full database or disk is full” would be worth reading.

I suppose you could try filling /tmp with a big dd from /dev/zero to see if it can hold what it claims.
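A minimal sketch of that fill test (the file name and sizes here are my own choices; scale count up toward the reported free space):

```shell
# Write a test file into /tmp; raise count toward the ~74 GB that df
# reports, to see whether /tmp can really hold what it claims.
dd if=/dev/zero of=/tmp/fillcheck bs=1M count=100
df -k /tmp          # free space should drop by roughly the file size
rm /tmp/fillcheck   # clean up afterwards
```

If dd stops with “No space left on device” long before the claimed free space is used, df’s number is misleading.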
Tmpfs (kernel.org) warns that a badly configured (oversized) tmpfs may deadlock the machine, if you use it – you originally mentioned tmpfs in fstab.

Do you have enough real drive space to test whether moving the tempdir away from /tmp solves the space error?
This is especially worth trying if /tmp is tmpfs now, although I can’t find any documented df oddity for tmpfs…
I did test df with a deleted-but-still-open file, and it wasn’t fooled by that: free space stayed down.

```
# In one window:
dd if=/dev/zero bs=1024 count=1000 of=fill; tail -f fill
# In another window:
df -k .
rm fill
# Control-C the tail to see the free space increase
```
I did this on my /tmp, which seems to be on the regular VM drive space, the same as / is.

EDIT:

Was there an actual error message, or better yet a stack trace, posted here to show how it got to that?
You may have to look in About → Show log → Stored (and click the line) to get the needed information.

EDIT 2:

How are you sure that it’s a size problem? Some operations run only occasionally, such as compact (as needed).
These show in the regular job log, but log-file=<path> plus log-file-log-level=Information gives a better view.
That would also make it easier to see where in the process it fails. Do you have any descriptions now?
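If it helps, the tempdir move suggested earlier and these log options can go together on one run. This is only a sketch assuming the Linux duplicati-cli wrapper; the target URL, source folder, and file paths are placeholders for your own setup:

```shell
# Hypothetical invocation: send Duplicati's temp files to a bigger disk
# and write an Information-level log to a file for a clearer view.
duplicati-cli backup <target-url> <source-folder> \
  --tempdir=/mnt/bigdisk/duplicati-tmp \
  --log-file=/var/log/duplicati.log \
  --log-file-log-level=Information
```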

It’s also odd that you use so much /tmp space. Do you see more 100 MB files in there than expected?
That check misses any invisible (deleted-but-open) files, but you could probably see those with lsof, per the other topic.
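A small sketch of what lsof can reveal (assuming lsof is installed; +L1 lists files with a link count below one, i.e. deleted but still open):

```shell
# Demonstrate a deleted-but-still-open file holding /tmp space.
tmpf=$(mktemp /tmp/ghost.XXXXXX)
dd if=/dev/zero of="$tmpf" bs=1k count=100 2>/dev/null
exec 3<"$tmpf"            # hold the file open on fd 3
rm "$tmpf"                # directory entry gone; space still in use
lsof +L1 /tmp || true     # should list the file as "(deleted)" if lsof exists
exec 3<&-                 # close fd 3; the space is released
```

Closing the descriptor (or ending the process holding it) is what actually returns the space, which matches the tail/Control-C demonstration above.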
