Just to add to this discussion: I've also just noticed this on 22.214.171.124. I'd never noticed Duplicati using the /tmp folder before, and only spotted it today because it filled up. Duplicati is creating lots of dup-* files at my chunk size (50MB) and not deleting them. I had to wipe them all out even though a backup job was in progress.
Should these files be deleted as the backup progresses, or will they build up first? When I wiped them, there were 16GB of 50MB dup-* files in /tmp, so this really isn't sustainable: my backup is much larger than /tmp.
I’ve just turned on auto-cleanup, but the backup job is still in progress so I presume I’ll need to wait for it to finish and then see if it takes effect on the next run?
It won’t. Well, it won’t do what we want. I forgot that --auto-cleanup cleans up DESTINATION files, not the local stuff.
Unless you really want to help us with testing on 126.96.36.199, I'd suggest downgrading to 188.8.131.52, which should solve the problem going forward (though it won't clean up any already existing dup-* temp files).
Very happy to help with some testing. I'm waiting for a backup to run right now, and the backup does seem to be running correctly and transferring the files. I'm giving it a 10 minute window to transfer each file, and then a find-based cleanup command runs every few minutes through cron to clear /tmp and prevent a disk-full situation (sadly my test system has /tmp as part of the root filesystem, so it's not pretty when it fills up).
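For anyone wanting to do the same, a crontab entry along these lines would give that periodic cleanup. This is only a sketch, not the poster's exact command: the dup-* filename pattern, the 5-minute schedule, and the 10-minute age threshold are my assumptions.

```shell
# Hypothetical crontab entry (crontab -e), not an official Duplicati tool.
# Every 5 minutes, delete dup-* temp files in /tmp that are more than
# 10 minutes old, leaving a window for in-flight uploads to finish.
*/5 * * * * find /tmp -maxdepth 1 -type f -name 'dup-*' -mmin +10 -delete
```

The age threshold matters: deleting a dup-* file that Duplicati is still uploading will fail that transfer, so the cutoff should be comfortably longer than your slowest upload.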
Can confirm the issue under Debian stretch.
Temp files do not seem to be deleted, no matter what async file limit is set.
Result: the /tmp folder fills up completely, compromising the whole system's operation.
Since starting this thread, the same backup is still running (since 21/5). Backing up to Google Drive seems to be taking a loooong time. Looking at the temp files, it's creating a 100MB chunk on average about every 11 minutes. CPU and network utilisation on the server are minimal; the network picks up for a minute or so every 11 minutes (I'm assuming that's when it transfers the 100MB chunk to Google). So, not sure why it's running so slowly. As a result, I've not been able to test any of the other suggestions thus far. To keep the server from running out of space, I've scripted a batch file to automatically delete any created temp files that are older than 6 hours.
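The 6-hour cleanup described above can be sketched with GNU find. This is a hypothetical reconstruction, not the poster's actual script; the dup-* pattern and the /tmp default are assumptions.

```shell
#!/bin/sh
# Hypothetical cleanup sketch (not an official Duplicati tool):
# remove Duplicati dup-* temp files untouched for more than 6 hours.
# Target directory defaults to /tmp but can be passed as the first argument.
TARGET="${1:-/tmp}"
# GNU find: -mmin +360 matches files last modified more than 360 minutes
# (6 hours) ago; -delete removes them without spawning an rm per file.
find "$TARGET" -maxdepth 1 -type f -name 'dup-*' -mmin +360 -delete
```

Running it against a directory other than /tmp (e.g. `./cleanup.sh /var/tmp`) makes it easy to test safely before pointing it at the real temp folder.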
If I'm forced to reboot the server for any reason before the current backup completes, will Duplicati know to pick up where it left off?
Yep. Duplicati does most of its work in "atomic" chunks (meaning it will either completely finish its current step or start that step over the next time it is run).
Kudos to you for sticking it out over a 24 day (and going) backup run!
That being said, we definitely need to figure out why it’s so slow for you. My GUESS is it’s the database access in 184.108.40.206 and you’ll find downgrading to 220.127.116.11 (if that’s an option) will provide you with much better performance (and self-deleting temp files).
Yes, I can confirm that the temp files are still being left behind in 18.104.22.168. Backup speeds to Google Drive appear to be fixed. However, that also means the temp folder fills up a LOT faster now!
I upgraded over the 22.214.171.124 installation, and now, in addition to the temp file issue, the backup will no longer run if I set snapshot-policy to require. The log says Duplicati cannot access the drives (e.g. E:\ and F:\) selected for backup. Removing the snapshot policy allows the backup to run, but then obviously there's no VSS.