Filling up /tmp/ dir

I’ve noticed that the /tmp/ folder is being used by Duplicati to store certain files. That is fine with me, as long as it is used as intended: temporarily.
That, however, is not the case. The /tmp/ folder keeps filling up with files and never clears out. Since my backup task runs without issue, I hadn’t done anything about it until now. My root filesystem is filling up, and half of it is taken by Duplicati’s files in /tmp. Since we are talking about my backups, I did not want to remove any files in this folder without checking with you guys/gals first.

Files in the folder:

root@4a07aeba560e:/# du -sh /tmp/*
940K /tmp/dup-4c9d01a5-cb86-4577-9d68-817166a28176
5.6G /tmp/dup-50d38874-fe19-40bf-9799-b8f3b098439e
17M /tmp/dup-564b25b0-865e-4d1d-8a68-ccaffab4046d
4.0K /tmp/dup-66127b14-1907-474a-a390-a00f9aef98d1
6.9G /tmp/dup-66f1d011-38d4-4c68-b999-b9dac05f9b9c
11M /tmp/dup-6d13cea7-eb1d-4911-abe2-b67ab2278f45
15M /tmp/dup-781dd4e2-2660-4bc8-8fe2-8f2cd5a78bcf
21M /tmp/dup-7d052771-c473-4111-8ddb-3d8751cca1d2
2.2M /tmp/dup-85c3aa64-2776-4828-bb4d-c215350fa09d
4.0K /tmp/dup-87453ba7-f6e6-4dac-8859-8eacbbb5fcec
2.2M /tmp/dup-9bed772a-58eb-46df-8546-ef307d6bb427
6.9G /tmp/dup-a1a8de99-c219-48eb-8f1c-e76c03ec6b30
3.0M /tmp/dup-a2140c04-c672-4e99-a01a-e97383d096df
4.2M /tmp/dup-c8b31475-a5c5-4f17-a986-c103a9615205
1.8G /tmp/dup-d4ab76ad-db39-40d7-a3e3-aee512005585
2.3M /tmp/dup-d808aba9-5380-4434-a8c9-df5e8855f85d
11G /tmp/dup-de142df8-9820-41e2-b6b8-45b9102e0d1a
8.9G /tmp/dup-e3c5da09-7186-4d77-9aeb-c90585f67c6f
11G /tmp/dup-f2851c4b-5ae7-4b10-867b-76e107feb42b
11G /tmp/dup-f875723e-8083-456f-99f0-509a67584c88
4.0K /tmp/dupl-usagereport-240-20190520141547.json
4.0K /tmp/dupl-usagereport-240-20190531000500.json
4.0K /tmp/dupl-usagereport-240-20190531174837.json
4.0K /tmp/dupl-usagereport-240-20190611175543.json
4.0K /tmp/dupl-usagereport-242-20190611175255.json
0 /tmp/HttpServer
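For anyone in the same situation, a cautious first step is to list which dup-* leftovers are old enough to be likely orphans before deleting anything. A sketch (the one-day cutoff is an assumption; adjust to your schedule):

```shell
# List dup-* entries in /tmp untouched for more than a day (likely orphans).
# ASSUMPTION: no Duplicati job is running; an active backup's temp files
# could otherwise appear here and must not be removed.
find /tmp -maxdepth 1 -name 'dup-*' -mtime +1 -exec du -sh {} +
# Once reviewed, the same filter can delete them:
# find /tmp -maxdepth 1 -name 'dup-*' -mtime +1 -exec rm -rf {} +
```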

My backup task is set to back up to /backups, and these are its contents:

[leviathan kclijsters]# du -sh /share/Backup/*
9.7M /share/Backup/duplicati-20190227T121419Z.dlist.zip.aes
17M /share/Backup/duplicati-20190411T114903Z.dlist.zip.aes
23M /share/Backup/duplicati-20190519T000000Z.dlist.zip.aes
24M /share/Backup/duplicati-20190526T000002Z.dlist.zip.aes
25M /share/Backup/duplicati-20190602T000002Z.dlist.zip.aes
26M /share/Backup/duplicati-20190608T000001Z.dlist.zip.aes
26M /share/Backup/duplicati-20190609T000001Z.dlist.zip.aes
26M /share/Backup/duplicati-20190610T000001Z.dlist.zip.aes
26M /share/Backup/duplicati-20190611T000001Z.dlist.zip.aes
26M /share/Backup/duplicati-20190613T000001Z.dlist.zip.aes
26M /share/Backup/duplicati-20190614T000002Z.dlist.zip.aes
11G /share/Backup/duplicati-b02ed6716ca5a4de88af874a97aac7177.dblock.zip.aes
631M /share/Backup/duplicati-b134a3488d7bc417e828981f11c95ee3d.dblock.zip.aes
542M /share/Backup/duplicati-b1b82d15450414471a5542ad00f70de3e.dblock.zip.aes
490M /share/Backup/duplicati-b3efc7e3784b949239d59520ee50c4704.dblock.zip.aes
460M /share/Backup/duplicati-b41b7f077ecc74561bdc93bc6248850a7.dblock.zip.aes
504M /share/Backup/duplicati-b4287a8a35972487a964f2f1a91262639.dblock.zip.aes
11G /share/Backup/duplicati-b709e06af95734753b9b163bd112a34ce.dblock.zip.aes
8.9G /share/Backup/duplicati-badf4f3ba10fa4d988251dfb7f97c905a.dblock.zip.aes
923M /share/Backup/duplicati-bbff059f8eda6462db24ff2bf6e955551.dblock.zip.aes
4.9G /share/Backup/duplicati-bc6e8561dc24f4d95a46788523cfeb5b7.dblock.zip.aes
591M /share/Backup/duplicati-bec3a2332d4764cb9a4c35a4af8658737.dblock.zip.aes
519M /share/Backup/duplicati-bf5e7d3ee2e8e444cbda71c57ef6922ab.dblock.zip.aes
3.9G /share/Backup/duplicati-bfcb2fa073b624b098cfc07a00e5da9d4.dblock.zip.aes
2.0M /share/Backup/duplicati-i2dc9df74d81c432b9e344a27d3c7d6b8.dindex.zip.aes
1.5M /share/Backup/duplicati-i481e11438c6b4566a18cdbb102f5abb3.dindex.zip.aes
2.6M /share/Backup/duplicati-i6b481d6d4324495e87ae34fe008dce8f.dindex.zip.aes
6.6M /share/Backup/duplicati-i6cd6fe5c108b4bf8b2228b827eec00ec.dindex.zip.aes
15M /share/Backup/duplicati-i78b71c59d0614fe7a516fc8c3648f1cb.dindex.zip.aes
1.6M /share/Backup/duplicati-i7cefe20f458a47d6bfc5bf58a95856b5.dindex.zip.aes
1.5M /share/Backup/duplicati-i93086fa623be453ab8fda7d9c282eade.dindex.zip.aes
14M /share/Backup/duplicati-i9721b5ab45734541a5d6761bcd20bdce.dindex.zip.aes
17M /share/Backup/duplicati-i9e0d473380624f33a4c978cf7822943a.dindex.zip.aes
1.6M /share/Backup/duplicati-ia6cb5986e1d5406f86586bd692fb9d18.dindex.zip.aes
1.5M /share/Backup/duplicati-ic28ebf7c2a094baf8f3a7ce824070433.dindex.zip.aes
9.7M /share/Backup/duplicati-ida2f40d170fc4efe9b2c49915e7e25ca.dindex.zip.aes
1.6M /share/Backup/duplicati-idb991ba40c304b56898af8d1a3ad1af8.dindex.zip.aes

I looked around on the forums and didn’t find issues quite like this one… Some people have had problems with the /tmp dir, but not that it keeps filling up without ever removing old files.

It appears this issue was fixed in February:

But there hasn’t been a new beta release since that fix. Hopefully a new beta will be released soon, as it’s been over six months now!


I would mark it as the solution, but it doesn’t solve my issue if the patch isn’t in a production release yet.


Yep. I manually clean up the temp folder every week or two. Looking forward to the next beta release.

Horrible “Me Too”

I’m experimenting with reducing the chunk size to approx 50% of /tmp to see if this helps at all.

Wanted to document my experience in case someone else comes across this problem. I added an extra 70 GB to a backup of a small Fedora server, and it kept failing near the end with /tmp being full.

On my system the tmpfs-backed /tmp was only 180 MB, so I temporarily increased it to 500 MB and it worked, using the command mount -o remount,size=500M,noatime /tmp

It’s only a temporary increase done this way, so it might be useful for getting past a problematic backup. Now I need to wait for the next run after I can reboot the server to see whether the remaining backups continue to fail. If they do, I will make the change permanent or see if I can move /tmp.

The Canary fix mentioned above should be in 2.0.5.1 Beta and may help.
If I read it correctly, the problem could arise on compact at end of backup.

Unlikely, as I’m on 2.0.5.101_canary_2020-01-23 and the files are getting deleted; it just filled up during this one backup. Also, as the server has been rebooted a few times this week, /tmp would have been emptied anyway.

You mean 180 MB/MiB (bytes), right, not 180 Mb (bits)? Even if you meant MB, 180 is probably too small. By default remote volumes are 50 MiB in size, and I believe Duplicati uploads 4 in parallel by default (check its concurrency settings).

Can you clarify? Are you saying 2.0.5.1 deletes them except in one case? Do you know where the fill happened, e.g. was it after the upload phase, when a compact might have run? If so, maybe the fix didn’t fix it. However, if it was in the middle of the backup, then it might just be a small /tmp plus queuing as set by --asynchronous-upload-limit = 4. If it was near the end of the backup (but not yet in delete processing), it might have been the final operations, where partially filled dblock files are gathered together into more complete ones.
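To put rough numbers on that queuing effect: with the defaults, several finished-but-not-yet-uploaded volumes can sit in /tmp at once, so a tiny tmpfs fills quickly. A back-of-the-envelope sketch (the values are the documented defaults; verify them in your job’s advanced options):

```shell
# Worst-case /tmp held by the upload queue alone (assumed defaults):
VOLUME_MB=50   # remote volume ("dblock") size, default 50 MB
QUEUE=4        # --asynchronous-upload-limit, default 4
echo "upload queue can hold ~$((VOLUME_MB * QUEUE)) MB in /tmp"
```

By that estimate, a 180 MB /tmp cannot even hold a full default upload queue, let alone the dlist file being written alongside it.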

Yeah, MB; corrected my post. It’s normally sized automatically based on free RAM, so I’d need to add it to fstab as a proper mount in order to keep it at 500 MB between boots.
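For reference, making the size stick across reboots would look something like the following fstab entry (an illustrative sketch; the size and options are assumptions, adjust to your system):

```shell
# Example /etc/fstab line pinning /tmp to a 500 MB tmpfs across reboots:
# tmpfs  /tmp  tmpfs  defaults,size=500M,noatime  0  0
```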

The deletions have been under control for some time; it only happened once I had added this large folder to the backup job. Here’s the log:

Failed: Disk full. Path /tmp/dup-e60644ac-01b3-4d47-8bbf-9bac5c6b572f
Details: System.IO.IOException: Disk full. Path /tmp/dup-e60644ac-01b3-4d47-8bbf-9bac5c6b572f
  at System.IO.FileStream.FlushBuffer () [0x00090] in <76c3dbc427f049499d13c500c18191dc>:0 
  at System.IO.FileStream.WriteInternal (System.Byte[] src, System.Int32 offset, System.Int32 count) [0x000d5] in <76c3dbc427f049499d13c500c18191dc>:0 
  at System.IO.FileStream.Write (System.Byte[] array, System.Int32 offset, System.Int32 count) [0x000a5] in <76c3dbc427f049499d13c500c18191dc>:0 
  at System.IO.StreamWriter.Flush (System.Boolean flushStream, System.Boolean flushEncoder) [0x00098] in <76c3dbc427f049499d13c500c18191dc>:0 
  at System.IO.StreamWriter.Write (System.String value) [0x0002b] in <76c3dbc427f049499d13c500c18191dc>:0 
  at Newtonsoft.Json.Utilities.JavaScriptUtils.WriteEscapedJavaScriptString (System.IO.TextWriter writer, System.String s, System.Char delimiter, System.Boolean appendDelimiters, System.Boolean[] charEscapeFlags, Newtonsoft.Json.StringEscapeHandling stringEscapeHandling, Newtonsoft.Json.IArrayPool`1[T] bufferPool, System.Char[]& writeBuffer) [0x00024] in <d47de75a7e3f422ca4ca64a654c80495>:0 
  at Newtonsoft.Json.JsonTextWriter.WriteEscapedString (System.String value, System.Boolean quote) [0x00020] in <d47de75a7e3f422ca4ca64a654c80495>:0 
  at Newtonsoft.Json.JsonTextWriter.WritePropertyName (System.String name) [0x00007] in <d47de75a7e3f422ca4ca64a654c80495>:0 
  at Duplicati.Library.Main.Volumes.FilesetVolumeWriter.WriteMetaProperties (System.String metahash, System.Int64 metasize, System.String metablockhash, System.Collections.Generic.IEnumerable`1[T] metablocklisthashes) [0x0001c] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Volumes.FilesetVolumeWriter.AddFileEntry (Duplicati.Library.Main.FilelistEntryType type, System.String name, System.String filehash, System.Int64 size, System.DateTime lastmodified, System.String metahash, System.Int64 metasize, System.String metablockhash, System.String blockhash, System.Int64 blocksize, System.Collections.Generic.IEnumerable`1[T] blocklisthashes, System.Collections.Generic.IEnumerable`1[T] metablocklisthashes) [0x000bd] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Volumes.FilesetVolumeWriter.AddFile (System.String name, System.String filehash, System.Int64 size, System.DateTime lastmodified, System.String metahash, System.Int64 metasize, System.String metablockhash, System.String blockhash, System.Int64 blocksize, System.Collections.Generic.IEnumerable`1[T] blocklisthashes, System.Collections.Generic.IEnumerable`1[T] metablocklisthashes) [0x00000] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Database.LocalDatabase.WriteFileset (Duplicati.Library.Main.Volumes.FilesetVolumeWriter filesetvolume, System.Int64 filesetId, System.Data.IDbTransaction transaction) [0x00259] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Operation.Backup.BackupDatabase+<>c__DisplayClass30_0.<WriteFilesetAsync>b__0 () [0x00000] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Operation.Common.SingleRunner+<>c__DisplayClass3_0.<RunOnMain>b__0 () [0x00000] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Operation.Common.SingleRunner.DoRunOnMain[T] (System.Func`1[TResult] method) [0x000b0] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Operation.Backup.UploadRealFilelist+<>c__DisplayClass1_0.<Run>b__0 (<>f__AnonymousType6`1[<Output>j__TPar] self) [0x0033f] in <45f00521e1e84c2c83156b62530c732c>:0 
  at CoCoL.AutomationExtensions.RunTask[T] (T channels, System.Func`2[T,TResult] method, System.Boolean catchRetiredExceptions) [0x000d5] in <9a758ff4db6c48d6b3d4d0e5c2adf6d1>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x01033] in <45f00521e1e84c2c83156b62530c732c>:0 
  at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task) [0x00050] in <9a758ff4db6c48d6b3d4d0e5c2adf6d1>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x00009] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Controller+<>c__DisplayClass14_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x0004b] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0011c] in <45f00521e1e84c2c83156b62530c732c>:0 

Log data:
2020-02-07 16:31:55 +01 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.IO.IOException: Disk full. Path /tmp/dup-e60644ac-01b3-4d47-8bbf-9bac5c6b572f
  at System.IO.FileStream.FlushBuffer () [0x00090] in <76c3dbc427f049499d13c500c18191dc>:0 
  at System.IO.FileStream.WriteInternal (System.Byte[] src, System.Int32 offset, System.Int32 count) [0x000d5] in <76c3dbc427f049499d13c500c18191dc>:0 
  at System.IO.FileStream.Write (System.Byte[] array, System.Int32 offset, System.Int32 count) [0x000a5] in <76c3dbc427f049499d13c500c18191dc>:0 
  at System.IO.StreamWriter.Flush (System.Boolean flushStream, System.Boolean flushEncoder) [0x00098] in <76c3dbc427f049499d13c500c18191dc>:0 
  at System.IO.StreamWriter.Write (System.String value) [0x0002b] in <76c3dbc427f049499d13c500c18191dc>:0 
  at Newtonsoft.Json.Utilities.JavaScriptUtils.WriteEscapedJavaScriptString (System.IO.TextWriter writer, System.String s, System.Char delimiter, System.Boolean appendDelimiters, System.Boolean[] charEscapeFlags, Newtonsoft.Json.StringEscapeHandling stringEscapeHandling, Newtonsoft.Json.IArrayPool`1[T] bufferPool, System.Char[]& writeBuffer) [0x00024] in <d47de75a7e3f422ca4ca64a654c80495>:0 
  at Newtonsoft.Json.JsonTextWriter.WriteEscapedString (System.String value, System.Boolean quote) [0x00020] in <d47de75a7e3f422ca4ca64a654c80495>:0 
  at Newtonsoft.Json.JsonTextWriter.WritePropertyName (System.String name) [0x00007] in <d47de75a7e3f422ca4ca64a654c80495>:0 
  at Duplicati.Library.Main.Volumes.FilesetVolumeWriter.WriteMetaProperties (System.String metahash, System.Int64 metasize, System.String metablockhash, System.Collections.Generic.IEnumerable`1[T] metablocklisthashes) [0x0001c] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Volumes.FilesetVolumeWriter.AddFileEntry (Duplicati.Library.Main.FilelistEntryType type, System.String name, System.String filehash, System.Int64 size, System.DateTime lastmodified, System.String metahash, System.Int64 metasize, System.String metablockhash, System.String blockhash, System.Int64 blocksize, System.Collections.Generic.IEnumerable`1[T] blocklisthashes, System.Collections.Generic.IEnumerable`1[T] metablocklisthashes) [0x000bd] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Volumes.FilesetVolumeWriter.AddFile (System.String name, System.String filehash, System.Int64 size, System.DateTime lastmodified, System.String metahash, System.Int64 metasize, System.String metablockhash, System.String blockhash, System.Int64 blocksize, System.Collections.Generic.IEnumerable`1[T] blocklisthashes, System.Collections.Generic.IEnumerable`1[T] metablocklisthashes) [0x00000] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Database.LocalDatabase.WriteFileset (Duplicati.Library.Main.Volumes.FilesetVolumeWriter filesetvolume, System.Int64 filesetId, System.Data.IDbTransaction transaction) [0x00259] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Operation.Backup.BackupDatabase+<>c__DisplayClass30_0.<WriteFilesetAsync>b__0 () [0x00000] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Operation.Common.SingleRunner+<>c__DisplayClass3_0.<RunOnMain>b__0 () [0x00000] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Operation.Common.SingleRunner.DoRunOnMain[T] (System.Func`1[TResult] method) [0x000b0] in <45f00521e1e84c2c83156b62530c732c>:0 
  at Duplicati.Library.Main.Operation.Backup.UploadRealFilelist+<>c__DisplayClass1_0.<Run>b__0 (<>f__AnonymousType6`1[<Output>j__TPar] self) [0x0033f] in <45f00521e1e84c2c83156b62530c732c>:0 
  at CoCoL.AutomationExtensions.RunTask[T] (T channels, System.Func`2[T,TResult] method, System.Boolean catchRetiredExceptions) [0x000d5] in <9a758ff4db6c48d6b3d4d0e5c2adf6d1>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x00c1d] in <45f00521e1e84c2c83156b62530c732c>:0

Does “large” mean number of files, size of files, or both? The log surprisingly looks like it ran out of space while writing the dlist file, which holds info on the paths in that backup. Many files (or deep paths) might do that; however, the dblock file (at the default 50 MB) is usually larger than the dlist file. A large folder might change that…

It was 70 GB of files, 20 or so folders, over 100,000 files.

Taking a guess about path length, that might have added 25 or 30 MB to whatever was there before; however, I’d have thought you’d get a more persistent problem, assuming the folder remained in the backup.

Beyond that, maybe it was just some unfortunate alignment of things that needed some more space.

The same backup tried to run today and failed with

Failed: SQLite error
cannot commit - no transaction is active

Repair failed, so I’m doing a recreate. However, as I rebooted the server and the tmpfs shrank back to its original size, I’m seeing it fill up again, so I suspect I’ll need to increase it permanently. I may have to redo the recreate if it does. It didn’t fail because of space, as I watched it when I first attempted to re-run the backup.