Temp space filled, but no actual files in /tmp

Recently I had to fully rebuild the database for one of my backups (about 700 GB, SFTP backend inside my LAN), but it failed mid-process due to a full disk. I’m using a 2 GB tmpfs RAM disk as temporary storage, so it’s not that hard to fill up. I re-ran the database rebuild and watched the /tmp directory closely: while files were being created and deleted normally, available space (according to df /tmp) kept going down, even though the actual files in /tmp only amounted to a few MB.

After some Googling, I found that this weird scenario can happen when an application deletes a file while another one (or the same one) still has it open. Running “lsof -p DUPLICATI_PID | grep deleted”, I saw that there was a deleted temporary file in /tmp, owned by Duplicati, that, despite being deleted, kept growing as the database rebuild went on, thus filling up all the temp space. Since it has a temporary-looking name I don’t think the name itself says much, but here’s the output of lsof:

mono-sgen 7205 root   13u      REG   0,47      4640       27 /tmp/etilqs_1JehsB8quFmBjKI (deleted)
mono-sgen 7205 root   14u      REG   0,47 656350208       34 /tmp/etilqs_UHa7Doxud4bAb3w (deleted)
mono-sgen 7205 root   16u      REG   0,47     12336       17 /tmp/etilqs_wMKauD4CBNSTiyE (deleted)
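
For reference, this is roughly how I’m comparing what the filesystem reports against what’s actually visible (du only counts files you can still see, while df also counts blocks held by deleted-but-open files, which explains the gap):

df -h /tmp                     # space the filesystem says is in use
du -sh /tmp                    # space taken by files still visible in /tmp
lsof -p 7205 | grep deleted    # open-but-unlinked files accounting for the difference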

I’m running the experimental channel inside a Docker container. Any idea what this “deleted yet still open and being written to” file could be? Is there anything else I can do that might help identify what’s happening here?

Thanks!

Google etilqs :wink:

That’s “sqlite” spelled backwards, and there’s an interesting story behind it in the source.

I’m not an SQLite internals expert, but based on Temp Databases and some more Google work, I think you could copy the open file out with the cat command via /proc/pid/fd/#. It could then be examined with DB Browser for SQLite (for browsing) or SQLite Analyzer (for table size statistics).
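
Something along these lines, using the PID and fd number from your lsof output (fd 14 is the big one); whether the copy opens cleanly depends on what kind of SQLite temp file it actually is:

cat /proc/7205/fd/14 > /root/etilqs-copy.sqlite   # copy the still-open, deleted file out
sqlite3 /root/etilqs-copy.sqlite ".tables"        # quick look, if it turns out to be a temp database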

Google for "CREATE TEMPORARY TABLE" site:github.com/duplicati/duplicati/blob/master noticing that Duplicati does this a lot, so the trick is to find which took big space and if it was necessary. Possibly some temporary tables could have been been deleted sooner, and I assume that frees space. EDIT: possibly not immediately though, but it makes free pages that later tables can use for their needs. Also possible is that the tables are required, so I’m not immediately willing to say making them is a bug.

Meanwhile, --tempdir has a note on TMPDIR if you need to get SQLite to put its temporary files elsewhere.
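
For example, something like this (a sketch only; the path, the storage URL placeholder, and the duplicati-cli invocation are assumptions about how your setup is launched):

mkdir -p /data/duplicati-tmp                   # disk-backed, not tmpfs
export TMPDIR=/data/duplicati-tmp              # SQLite on Linux honors TMPDIR for its temp files
duplicati-cli repair <storage-url> --tempdir=/data/duplicati-tmp   # Duplicati's own temp files too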

As for why the files are deleted, do you know if that happened really fast, or whether something like tmpwatch runs? UNIX-style programs sometimes delete files right after opening them, to be sure the OS cleans them up if the program dies.
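
You can reproduce that pattern from a bash prompt, by the way (a small demonstration, nothing Duplicati-specific):

exec 3<> /tmp/demo          # open (and create) a file, keeping fd 3 on it
rm /tmp/demo                # the name disappears from /tmp...
echo "still writable" >&3   # ...but the fd still works and still consumes space
ls -l /proc/$$/fd/3         # the link shows "/tmp/demo (deleted)"
exec 3>&-                   # only closing the fd actually frees the blocks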


What version of Duplicati are you running?

I wonder if this is related to the bug where temp files weren’t being deleted properly on Windows. Unlike on Linux, on Windows you can’t delete a file that is open. Maybe the root cause is that the file locks aren’t always released before Duplicati attempts to delete the temp file.

Didn’t know about this etilqs thing; it seems file names aren’t that useless after all :slight_smile:

So it seems to be an SQLite thing… if it’s actually a problem. I was surprised because I was watching /tmp closely and hadn’t noticed these files, so it looks like the case where a file gets created and immediately deleted. I don’t think it’s something left behind, because it keeps growing as the database is being rebuilt. I’m not familiar with Duplicati’s database structure, but it may well be a temporary file for some huge table that holds whatever is in Duplicati’s database. It’s a little weird that the full database data gets written twice in different places (once in temp and again wherever the SQLite file lives), but it may be legitimate behaviour.

It’s also weird that this never happened before, but it’s been a while since I last rebuilt a database, and the data set was smaller back then. I ended up solving the problem by letting Duplicati’s Docker container use its own /tmp (on disk) instead of the system /tmp (in RAM).
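
For anyone hitting the same thing, the change boils down to giving the container a disk-backed /tmp instead of sharing the host’s tmpfs (the image name and host path below are just placeholders, not my exact setup):

docker run -d -v /srv/duplicati-tmp:/tmp duplicati/duplicati   # /tmp now lives on disk, not in RAM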

Thanks!

I’m using experimental (2.0.4.21). The funny thing is, I updated from beta because beta has a (different) nasty bug where compacting leaves temporary files behind, also filling up the temp space pretty quickly.

I don’t think this file is left behind, because it’s continuously growing, so Duplicati is definitely doing something with it. It looks more like a not-very-efficient use of temporary SQLite tables or something, because it seems like the full database is being stored in tmp, instead of only the needed bits.

Thanks!