Limit the number of dup- temp files generated during backup

I’m running a backup where the source data has grown by 50GB.
The dup- temp files are generated in the temp folder on the system drive (which has 20GB free).
The backup fails with “not enough space” on the system drive once the temp files fill it.

Is there a way to tell Duplicati: generate 5GB of files to be uploaded, upload them, delete the temps, then generate the next 5GB and continue?

I tried setting --synchronous-upload to true, but it keeps happening.

Maybe it is working this way and just the “delete temps” step is not happening, because of:

Thx

The number of temp files being prepped for upload can be limited with --asynchronous-upload-limit=x (where x defaults to 4).

So if your dblock (Upload Volume Size) is set to 100MB and you set --asynchronous-upload-limit=3, then only 300MB of temp storage will be used for pending uploads.
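For example (a sketch only; the backend URL and source path here are placeholders, and the same options can also be added as advanced options in the GUI), a command-line run with both limits set might look like:

Duplicati.CommandLine.exe backup "ftp://backup-server/store" "D:\Data" --dblock-size=100MB --asynchronous-upload-limit=3

With those values, at most roughly 3 x 100MB of finished volumes should be waiting in the temp folder at any one time, plus the volume currently being built.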

This does NOT affect other temp files that might be used for the sqlite database or testing / decompression of downloaded files during the verification step.

Note that there was a bug (around versions 2.0.3.6 - 2.0.3.8) that caused temp files to not be cleaned up.

Ok, how do you explain this? :slight_smile:
latest version 3.11
--asynchronous-upload-limit - not set, so the default of 4
volume size 1 GB
C: free space before backup start: 35GB

After the run, this exception:

Details: System.IO.IOException: There is not enough space on the disk.

at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.WriteCore(Byte[] buffer, Int32 offset, Int32 count)
at SharpAESCrypt.Threading.DirectStreamLink.write(Byte[] buffer, Int32 offset, Int32 count)
at System.Security.Cryptography.CryptoStream.Write(Byte[] buffer, Int32 offset, Int32 count)
at SharpAESCrypt.SharpAESCrypt.Write(Byte[] buffer, Int32 offset, Int32 count)
at Duplicati.Library.Utility.Utility.CopyStream(Stream source, Stream target, Boolean tryRewindSource, Byte[] buf)
at Duplicati.Library.Encryption.EncryptionBase.Encrypt(Stream input, Stream output)
at Duplicati.Library.Encryption.EncryptionBase.Encrypt(String inputfile, String outputfile)
at Duplicati.Library.Main.BackendManager.FileEntryItem.Encrypt(IEncryption encryption, IBackendWriter stat)
at Duplicati.Library.Main.BackendManager.Put(VolumeWriterBase item, IndexVolumeWriter indexfile, Boolean synchronous)
at Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)
at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact, BackendManager sharedManager)
at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
at Duplicati.Library.Main.Operation.BackupHandler.d__19.MoveNext()
— End of stack trace from previous location where exception was thrown —
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
at Duplicati.Library.Main.Controller.<>c__DisplayClass13_0.b__0(BackupResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)

Log data:
2018-09-25 02:51:24 +02 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.IO.IOException: There is not enough space on the disk.

at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.WriteCore(Byte[] buffer, Int32 offset, Int32 count)
at SharpAESCrypt.Threading.DirectStreamLink.write(Byte[] buffer, Int32 offset, Int32 count)
at System.Security.Cryptography.CryptoStream.Write(Byte[] buffer, Int32 offset, Int32 count)
at SharpAESCrypt.SharpAESCrypt.Write(Byte[] buffer, Int32 offset, Int32 count)
at Duplicati.Library.Utility.Utility.CopyStream(Stream source, Stream target, Boolean tryRewindSource, Byte[] buf)
at Duplicati.Library.Encryption.EncryptionBase.Encrypt(Stream input, Stream output)
at Duplicati.Library.Encryption.EncryptionBase.Encrypt(String inputfile, String outputfile)
at Duplicati.Library.Main.BackendManager.FileEntryItem.Encrypt(IEncryption encryption, IBackendWriter stat)
at Duplicati.Library.Main.BackendManager.Put(VolumeWriterBase item, IndexVolumeWriter indexfile, Boolean synchronous)
at Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)
at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact, BackendManager sharedManager)
at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
at Duplicati.Library.Main.Operation.BackupHandler.d__19.MoveNext()

Well, as usual it’s not as simple as first described. :wink:

But before I try to talk my way out of this, I have to ask if you REALLY want 1G upload files. Using that Upload Volume Size means that if you want to restore a 100kb file, you’ll need to download AT LEAST one 1G dblock file (maybe two)…

As far as actual temp usage, here’s an example…

Before the encrypted .aes file can be created, a .zip version is made. So the “active” file actually takes double the size for a short while (in your case 2G), on top of the already completed .zip.aes files (in your case 3G). So that’s 5G.

And I’m not positive about it, but depending on settings and available memory, it’s possible the uncompressed contents of the 1G zip file (each of the block files) might need to exist on disk as well.

Assuming I’m not wrong about that (which I very well could be), that would mean likely another 2-5x the dblock (Upload Volume) size for the active file (remember, this is before being compressed). But that’s just guessing.
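Putting rough numbers on that (using your 1G volume size and the default asynchronous-upload-limit of 4, and just adding up the guesses above):

completed .zip.aes volumes queued for upload: ~3G
active volume (.zip plus the .aes being written): ~2G
possibly the uncompressed block contents of the active volume: another ~2-5G

so a worst case somewhere around 7-10G of temp space, but again, the last part is guesswork.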

Honestly, it looks from the error like it’s the final “make a .aes version of the .zip” step that’s causing the space issue, so my guess is that if you lower the asynchronous-upload-limit even just to 3 you’ll get a good run.

Of course I’d more strongly recommend a saner dblock size instead… :slight_smile:

I see it’s in DoCompact(). I wonder if --no-auto-compact would help as a test, to see whether it stops the disk from filling?
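For a test that would be something like adding:

--no-auto-compact=true

to the job (on the command line or under the GUI’s advanced options), running the backup again, and watching whether the temp folder still fills up.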

If you keep an Information-level log, you might see lines that show what it downloaded. One of my old runs had BackendEvent entries of 15 Get, some Put, 26 Delete, then 7 Get, some Put, 18 Delete. The compact was reported as:

2018-09-19 13:01:12 -04 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting because there are 21 small volumes and the maximum is 20
...
2018-09-19 13:03:56 -04 - [Information-Duplicati.Library.Main.Operation.CompactHandler-CompactResults]: Downloaded 22 file(s) with a total size of 89.78 MB, deleted 44 file(s) with a total size of 89.91 MB, and compacted to 4 file(s) with a size of 65.00 MB, which reduced storage by 40 file(s) and 24.91 MB

I picked this compact over those that just had fully deletable volume(s) in the hope of seeing more downloads.

Although it’s possible to mix dblock file sizes on the backend, if all yours are 1 GB your disk will fill up rapidly…

Choosing sizes in Duplicati offers advice on dblock sizes.

Yes, the 1GB files are on purpose. That specific backup is of virtual machines, so a few 20-60GB files (200GB together). There will never be a need to restore just one thing, only everything, to keep the virtual machines consistent.
The destination is on the LAN, so there is no need to consider upload/download speed.

You are right, it is happening during the compact, which runs after the backup. With a 600GB backup, choosing sizes smaller than 1GB means thousands of files in the destination :frowning:
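(Rough math, ignoring compression and the small dindex files: a 600GB backup at the 50MB default dblock size is on the order of 12,000 dblock files, at 100MB roughly 6,000, and at 1GB roughly 600.)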

Assuming you’re going to keep compact turned on, there are some tuning knobs to play with, but possibly the most reliable solution is to give the compact more space somewhere locally attached, or maybe even on the LAN.
--tempdir will let you point to it, and maybe that will take care of not only the “compact” temporaries, but any others…
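For example (the path is just an illustration; point it at whatever drive has room):

--tempdir=D:\DuplicatiTemp

set on the command line or as an advanced option on the job, so the big compact temporaries land somewhere other than the nearly full C: drive.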

Thanks. For now I will try --no-auto-compact and see how much the destination grows.