Fixing "Insertion failed because the database is full database or disk is full"

Hello Duplicati Community,

I have been running Duplicati in a Docker container on my NAS server for quite a while now. One thing that I did not like about the setup, however, is that when Duplicati was creating the blocks in the temporary directory (/tmp) within my container, the block files were being temporarily written to the NAS server’s SSD, thus resulting in unnecessary wear on the SSD’s cells. To fix this, I recently reconfigured my Duplicati container to mount a tmpfs filesystem in /tmp.

For those who are unaware, a tmpfs filesystem is a volatile filesystem that exists only in RAM. If the computer (or in this case, the container) is shut down, the contents of the filesystem are lost. It is truly temporary, and of course, much faster since there’s no writing to the disk involved (unless paging to the disk happens, that is).
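For anyone who wants to reproduce the setup, here is a minimal sketch of how such a mount can be declared when starting the container (the image name, config path, and 1GB size are placeholders for whatever your own setup uses):

    # Run Duplicati with a RAM-backed /tmp so temporary block files
    # never touch the SSD. Placeholder paths and size; adjust to your setup.
    docker run -d --name duplicati \
      --tmpfs /tmp:rw,size=1g \
      -v /srv/duplicati/config:/data \
      -p 8200:8200 \
      duplicati/duplicati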

The size of the tmpfs filesystem that I am using is 1GB, which I figured should be more than enough. However, after making the change, I have started encountering an issue where backups fail with the error, “Insertion failed because the database is full database or disk is full.” This is definitely not an issue with the disk on which the database file is stored, as there is more than enough free space there. I’m sure it is caused by the size limit of the tmpfs filesystem, because the issue does not occur if I remove the mount.

Does anyone know how I can fix this? The most obvious solution, of course, is to make the tmpfs filesystem larger than 1GB, but I cannot make it very large either, since it is a RAM-based filesystem and oversizing it would just lead to paging anyway. Ideally, I would like to mount the tmpfs filesystem in another location and configure Duplicati to use it for creating the block files, but not for the temporary SQLite files. Can this be done?

I should note that the amount of data in the backup that had this error is just under 7TB. I have a second backup that is 1.5TB that succeeded without hitting this error. Both of these backups are totally new.

This doesn’t seem to fix the problem. The issue still persists even when I change the location of the tmpfs file system and reconfigure my backups to use that alternative location.

Please clarify. You mounted it elsewhere? If so, what was left at /tmp?

Please clarify whether you did this with the TMPDIR variable or with --tempdir.
The manual explains the difference. Did the change move any temp files?
Duplicati’s own temp files tend to begin with dup-. I’m not sure of SQLite’s naming,
but I think it involves etilqs. Their docs page:

Temporary Files Used By SQLite

Basically, what files can you see where, and what does df see filling?
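Something along these lines, run inside the container while a backup is going, should answer both questions (just a sketch; adjust the path if your temp folder differs, and watch may not be present in every image):

    # Show tmpfs usage and the temp files currently in it.
    # dup-* names are Duplicati's own temp files; SQLite uses etilqs-style names.
    df -h /tmp
    ls -lAh /tmp

    # Or poll it repeatedly while the backup runs:
    watch -n 5 'df -h /tmp && ls -lAh /tmp'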

Please clarify how consistently it happens and at what point, e.g. what the status bar shows.

Where is your Duplicati Docker image from? What OS is the NAS running?

I tried remounting the tmpfs filesystem at /tmp/Duplicati and then reconfiguring my backups to all use /tmp/Duplicati as the temporary directory…

I tried setting --tempdir to /tmp/Duplicati, but didn’t do anything with TMPDIR since it should already be /tmp by default from what I understand.
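As a rough sketch of what that change amounted to (the exact options here are illustrative rather than copied from my configuration):

    # Mount the 1GB tmpfs at /tmp/Duplicati instead of /tmp ...
    docker run -d --name duplicati \
      --tmpfs /tmp/Duplicati:rw,size=1g \
      -v /srv/duplicati/config:/data \
      -p 8200:8200 \
      duplicati/duplicati

    # ... and point each backup job at it via its advanced options:
    #   --tempdir=/tmp/Duplicati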

When I tried retesting, df did show the filesystem filling up, but when I checked the contents, all I saw were some temporary Duplicati files whose sizes, summed up, came nowhere close to 1GB. I checked for hidden files with ls -a, but there was nothing.

It happens every time, and always right at the end of the backup when Duplicati is reporting that it is deleting old files.

I use the official Duplicati Docker image: Docker

I use a Linux distro intended for running a NAS called OpenMediaVault.

Problem solved! The backup seems to work fine if I increase the size of the tmpfs mount to 2GB. I still do not understand why /tmp consistently has an “invisible” file in it at the end of the backup process though.
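For the record, the change itself was nothing more than raising the size option on the tmpfs mount (sketch only; match it to your own container setup):

    # In Docker, the size is set wherever the tmpfs is declared, e.g.:
    #   --tmpfs /tmp:rw,size=2g

    # On a stand-alone tmpfs, the equivalent change would be a remount:
    mount -o remount,size=2G /tmp
    df -h /tmp   # should now report a 2.0G filesystem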

I’m glad you got it working. There are some remaining questions you can look into if you wish.

So if it’s still the case that the temp folder is /tmp/Duplicati and you set --tempdir but not TMPDIR, it should contain mostly Duplicati temporary files, not SQLite ones (which seems odd given the error). On Options screen 5, do you have the Remote volume size increased past its default 50 MB, or dblock-size or asynchronous-upload-limit set? You should be able to watch dup-* files moving through that folder as the backup progresses and its volumes upload. You can also keep an eye on space usage there with df, since nothing else uses that mount, and follow the backup’s uploads from the temp folder at About → Show log → Live → Information.

What filesystem is /tmp on now (e.g. per df), and is your current setup showing that, or is this a historical guess? “Invisible” files (not the same as files that begin with a dot) are files that are still open but have already been deleted. There’s no contradiction there on Linux: a file isn’t actually removed until the last user closes it. Until then it is perfectly usable from the opening program’s point of view, and from the filesystem’s point of view it still occupies space. You can find these in lsof output by looking for entries marked as (deleted) in the listing.
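A minimal way to spot them, assuming lsof is available inside the container:

    # List open files whose directory entry is already gone
    # (link count below 1); they still occupy space until closed.
    lsof +L1

    # Or scan the full listing for the "(deleted)" marker:
    lsof -nP | grep deleted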

When you say “at the end of the backup process”, do you mean during the final stages, or at the actual end or beyond?
A program can’t hold files open after it has exited, so if you think Duplicati is holding files open, try shutting Duplicati down and checking again.

That explains it! I was not aware that this is how it worked. I was able to confirm, by watching another backup, that this is exactly what is happening: something database-related grows to about 1.25GB during the final stages of the backup.

When I said this, I was referring to the final stages of the process.

Did you see that in the lsof output? What sort of name did it have? SQLite files shouldn’t be able to follow the --tempdir setting and fill up your /tmp/Duplicati, which was your tmpfs, but I’m still waiting to hear what is where.

Temporary File Storage Locations from www.sqlite.org explains file placement if those files are SQLite’s.

I really don’t remember the specifics, but they had etilqs in their names, which apparently marks them as SQLite temp files. I also saw in the documentation that SQLite temp files should be generated in the default temporary directory regardless of what the backup temp directory is set to, but clearly that wasn’t happening for me. Anyway, since 2GB seems to work, I’ve reconfigured the backups to once again use the default temp directory.

I think the manual predates a 2018 code change, Clean up tempdir environment vars #3266.
In that change you can see the old note about SQLite’s TMPDIR handling being removed. The latest Canary help text says:

Duplicati.CommandLine.exe help tempdir
  --tempdir (Path): Temporary storage folder
    This option can be used to supply an alternative folder for temporary
    storage. By default the system default temporary folder is used. Note
    that also SQLite will put temporary files in this temporary folder.

The above is consistent with your findings, so that clears up how --tempdir surprisingly got followed.

Ohh, well that is interesting! The documentation is in need of updating, then. I think this change to Duplicati’s behavior makes sense, but it would be nice if there were an option to disable it per backup configuration.

I would like to have a separate SQLite temporary directory. My SQLite databases themselves are approaching my RAM size at 12 GB, making inserts and vacuuming/cleaning fail. I still want to use my RAM-backed temp folder for the 50 MB x the number of concurrent uploads worth of upload volumes.

Sounds like a combination of support for this implicit feature request and a request for a workaround.
Either way is hard. The forum is not an issue tracker, but it has a Features category (this is Support).
Feature requests far outnumber developer volunteers, which is a huge limitation. Help wanted here…

Are you able to see the actual files that are filling the ramdisk? I hope the permanent database isn’t there; if it isn’t, I’m not sure why an SQL INSERT (is that what you mean?) would need space there. A VACUUM would, though.

What sort of cleaning is that? Can you post stack traces or messages for any or all of the “fail” cases above?

Some possible ideas:

asynchronous-upload-folder might take a little pressure off the RAM temp folder. I’m not sure how much.

Temporary File Storage Locations suggests that the deprecated PRAGMA temp_store_directory may work (a rough sketch follows at the end of this list).
DB Browser for SQLite might be able to apply it to whichever database has grown large enough to be a problem.

Reduce the database size by splitting the backup, reducing versions through retention, backing up less, or increasing the blocksize (which unfortunately requires starting the backup fresh). Try not to have a blocksize that results in more than a few million blocks, otherwise SQL operations become slow and the database gets large.
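As a very rough sketch of the temp_store_directory idea above, run against the job database with Duplicati stopped (the paths and database name are placeholders, the pragma is deprecated so it may be missing from some SQLite builds, and on Linux the SQLITE_TMPDIR variable alone is often enough):

    # Send SQLite's temporary files to a disk-backed folder just for this
    # manual VACUUM, instead of the RAM-backed default temp folder.
    SQLITE_TMPDIR=/srv/sqlite-temp sqlite3 /data/Duplicati/JOBDB.sqlite \
      "PRAGMA temp_store_directory='/srv/sqlite-temp'; VACUUM;"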