I'm running a backup whose source data has grown by 50 GB.
On the system drive (which has 20 GB free), dup- temp files are generated in the temp folder.
The backup then fails with "not enough space" on the system drive once it fills with temps.
Is there a way to say: "Hey Duplicati, generate a 5 GB file to be uploaded, upload it, delete the temp, then generate the next 5 GB file and continue"?
I tried setting synchronous-upload to true, but it keeps happening.
Maybe it is working this way and just the "delete temps" step is not happening, because of:
Well, as usual it’s not as simple as first described.
But before I try to talk my way out of this, I have to ask if you REALLY want 1 GB upload files. Using that Upload Volume Size means that if you want to restore a 100 KB file, you'll need to download AT LEAST one 1 GB dblock file (maybe two)…
As far as actual temp usage, here’s an example…
Before the encrypted .aes file can be created, a .zip version is made. So the "active" file actually takes double the size for a short while (in your case 2 GB), on top of the already completed .zip.aes files (in your case 3 GB). So that's 5 GB.
And I'm not positive about it, but depending on settings and available memory, it's possible the uncompressed contents of the 1 GB zip file (each of the block files) might need to exist on disk as well.
Assuming I'm not wrong about that (which I very well could be), that would likely mean another 2-5x more than the dblock (Upload Volume) size for the active file (remember, this is before being compressed). But that's just guessing.
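Putting the reasoning above into numbers, a rough sketch (my own back-of-the-envelope helper, not Duplicati's actual accounting, and it ignores the speculative uncompressed-contents part):

```python
def worst_case_temp_gb(dblock_gb, queued_volumes):
    """Rough worst-case temp usage per the reasoning above:
    the active volume exists briefly as both .zip and .zip.aes (2x dblock),
    on top of up to `queued_volumes` completed .zip.aes files waiting
    for upload (the queue asynchronous-upload-limit controls)."""
    return (2 + queued_volumes) * dblock_gb

# With 1 GB dblocks and 3 queued volumes, this gives the 5 GB from the example:
print(worst_case_temp_gb(1, 3))  # → 5
```

Lowering either the dblock size or the upload queue length shrinks the estimate proportionally.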
Honestly, it looks from the error like it's the final "make a .aes version of the .zip" step that's causing the space issue, so my guess is that if you lower asynchronous-upload-limit, even just to 3, you'll get a good run.
Of course I’d more strongly recommend a saner dblock size instead…
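If you want to try both suggestions at once, a sketch of the relevant options (the option names are real Duplicati options, but the destination URL and source path here are made-up placeholders; adjust for your setup):

```shell
# Hypothetical example: cap the upload queue at 3 volumes and use a
# smaller dblock size instead of 1 GB (50 MB is Duplicati's default).
duplicati-cli backup "file:///mnt/backup" /home/user/source \
  --asynchronous-upload-limit=3 \
  --dblock-size=50MB
```

Note that changing dblock-size only affects newly created volumes; existing ones keep their size.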
I see it's in DoCompact(). I wonder if --no-auto-compact would help as a test, to see whether it stops the disk filling?
If you keep an Information level log, you might see lines that show what it downloaded. One of my old runs did a BackendEvent sequence of 15 Get, some Put, 26 Delete, then 7 Get, some Put, 18 Delete. The compact reason was:
2018-09-19 13:01:12 -04 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting because there are 21 small volumes and the maximum is 20
2018-09-19 13:03:56 -04 - [Information-Duplicati.Library.Main.Operation.CompactHandler-CompactResults]: Downloaded 22 file(s) with a total size of 89.78 MB, deleted 44 file(s) with a total size of 89.91 MB, and compacted to 4 file(s) with a size of 65.00 MB, which reduced storage by 40 file(s) and 24.91 MB
I picked this compact over those that just had fully deletable volume(s) in the hope of seeing more downloads.
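As a quick sanity check, the numbers in that CompactResults line are internally consistent (this is just arithmetic on the log above, nothing Duplicati-specific):

```python
# Figures copied from the 2018-09-19 compact log lines.
deleted_files, deleted_mb = 44, 89.91
compacted_files, compacted_mb = 4, 65.00

# "reduced storage by 40 file(s) and 24.91 MB"
files_reduced = deleted_files - compacted_files
mb_reduced = round(deleted_mb - compacted_mb, 2)
print(files_reduced, mb_reduced)  # → 40 24.91
```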
Although it’s possible to mix dblock file sizes on the backend, if all yours are 1 GB your disk will fill up rapidly…
Yes, the 1 GB files are on purpose. That specific backup is of virtual machines, so it's a few 20-60 GB files (200 GB together). There will never be a need to restore a single file, only everything, to keep a virtual machine consistent.
The destination is on the LAN, so no need to consider upload/download speed.
Assuming you're going to keep compact turned on, there are some tuning knobs to play with, but possibly the most reliable solution is to give the compact more space somewhere locally attached, or maybe even on the LAN. --tempdir will let you point to it, and maybe that will solve not only the "compact" temporaries, but any others…