Trying to back up about 1.5 TB of files to Google Drive. However, the initial backup never completes because the C: drive keeps running out of disk space. I have found that Duplicati creates its temporary files in C:\Windows\Temp, as a series of files with names like dup-xxxxxxx, before uploading them to Google Drive. I have tested with and without --asynchronous-upload-limit=4, and the chunk size I've set is 100MB. So, after running for a while, the C:\Windows\Temp folder fills up with lots of 100MB dup-xxxxxxx files.
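For context, if the job were written out as a command line it would boil down to roughly the following (a sketch, not my exact command - the Google Drive folder, authid and source path are placeholders, and I'm assuming the chunk size corresponds to the --dblock-size option):

rem hypothetical equivalent of the backup job; target and source are placeholders
Duplicati.CommandLine.exe backup "googledrive://DuplicatiBackup?authid=PLACEHOLDER" "D:\Data\" --dblock-size=100MB --asynchronous-upload-limit=4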
I thought Duplicati deleted the dup-xxxxxx files once they were uploaded. Additionally, by setting the --asynchronous-upload-limit=4 option, I thought Duplicati would keep at most 4 of those files at a time, deleting and creating them as needed. At the moment, it creates and uploads those files but never seems to delete them, so the hard disk is slowly filling up again (the first time it completely filled the C: drive and crashed - I could not recover the backup, so I started again).
Am I missing an option somewhere, or is this a bug?
You are correct that those files should be deleted once they are no longer in use.
Are they all from the same run (date?) or do they go back over previous backup runs?
Oh, and have you always had the issue or is it new since starting with 2.0.3.6?
Edit:
If you just want to get going and not diagnose the underlying cause, you could try setting --auto-cleanup=true.
--auto-cleanup
If a backup is interrupted there will likely be partial files present on the backend. Using this flag, Duplicati will automatically remove such files when encountered.
Default value: “false”
Just to add to this discussion - I've also just noticed this on 2.0.3.6. I had never noticed the use of the /tmp folder before, and only spotted it today because it filled up. Duplicati seems to be creating lots of dup-* files of my chunk size (50MB) and not deleting them. I had to wipe them all out even though a backup job was in progress.
Should these files be deleted as the backup progresses, or will they build up first? When I had to wipe them all, there was 16GB worth of 50MB dup-* files in /tmp so this really isn’t sustainable as my backup size is much larger than /tmp.
I’ve just turned on auto-cleanup, but the backup job is still in progress so I presume I’ll need to wait for it to finish and then see if it takes effect on the next run?
It won’t. Well, it won’t do what we want. I forgot that --auto-cleanup cleans up DESTINATION files, not the local stuff.
Unless you really want to help us with testing on 2.0.3.6 I’d suggest downgrading to 2.0.3.5 which should solve the problem going forward (though it won’t clean up any already existing dup-* temp files).
I just had an idea, if you’re willing to do a test run…
If so, please try setting --concurrency-max-threads=1 (and maybe --concurrency-block-hashers=1 and --concurrency-compressors=1) to essentially disable the multi-threading.
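For example, something like this if you run the job from the command line (just a sketch - substitute your own target URL, source path and existing options; the same settings can also be added as advanced options in the GUI, and mono Duplicati.CommandLine.exe works the same way as the duplicati-cli wrapper):

# hypothetical test run with the concurrency settings forced down to a single thread
duplicati-cli backup "<your-target-url>" "<your-source-path>" --concurrency-max-threads=1 --concurrency-block-hashers=1 --concurrency-compressors=1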
If the temp files do get cleaned up, then that could help confirm that the source of the issue is the multi-threading code.
Very happy to help with some testing - a backup is running right now and does seem to be running correctly and transferring files, so I'm giving each file a 10 minute window to upload, and then the following command runs every few minutes through cron to clean up /tmp and prevent a disk-full situation (sadly my test system has /tmp as part of the root filesystem, so it's not pretty when it fills up).
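Something along these lines (the exact age threshold here is illustrative):

# remove Duplicati temp chunks in /tmp that are older than the upload window
find /tmp -maxdepth 1 -name 'dup-*' -type f -mmin +15 -delete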
The /tmp filesystem is filling up with dup-* files as before. Does this help with debugging the issue? Please let me know what else I can do to help get this fixed.
Can confirm the issue under Debian stretch.
Temp files do not seem to be deleted, no matter which asynchronous upload limit is set.
Result: the tmp folder fills up completely, compromising operation of the whole system…
I’m going to flag this topic as a bug (likely introduced with, but not as a part of, the multi-threading) and see if @kenkendk or @Pectojin have any thoughts on it.
Since starting this thread, the same backup is still running (since 21/5). Backing up to Google Drive seems to be taking a loooong time. Looking at the temp files, it's creating a 100MB chunk roughly every 11 minutes on average. CPU and network utilisation on the server is minimal. The network picks up for a minute or so every 11 minutes (I'm assuming that's when it transfers the 100MB chunk to Google), so I'm not sure why it's running so slowly. As a result, I've not been able to test any of the other suggestions thus far. To keep the server from running out of space, I've scripted a batch file to automatically delete any of the created temp files that are older than 6 hours.
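The batch file is essentially a single PowerShell call along these lines (a sketch - the real script may differ slightly):

rem delete dup-* temp files in C:\Windows\Temp that are more than 6 hours old
powershell -NoProfile -Command "Get-ChildItem 'C:\Windows\Temp\dup-*' -File | Where-Object { $_.LastWriteTime -lt (Get-Date).AddHours(-6) } | Remove-Item -Force"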
If I'm forced to reboot the server for any reason before the current backup completes, will Duplicati know to pick up where it left off?
Yep. Duplicati does most of its work in "atomic" chunks (meaning it will either completely finish its current step or will start it over the next time it is run).
Kudos to you for sticking it out over a 24 day (and going) backup run!
That being said, we definitely need to figure out why it's so slow for you. My GUESS is that it's the database access in 2.0.3.6, and you'll find that downgrading to 2.0.3.5 (if that's an option) will provide you with much better performance (and self-deleting temp files).
Any updates on this problem yet? Downgrading is not an option, and having to manually remove files from the temp dir is unreliable - if it fills up, it can and will crash and bring down the system.
I am running on unRAID via a Docker container and it works well. There is an update available, but it would downgrade to the beta version, and because the database is v5 with canary, the beta will not connect since it expects v4.
Yes, I can confirm that the temp files are still being left behind in 2.0.3.7. Backup speed to Google Drive appears to be fixed. However, that also means the Temp folder fills up a LOT faster now!
I upgraded over the 2.0.3.6 installation, and now, in addition to the temp file issue, the backup will no longer run if I set snapshot-policy to require. The logs say Duplicati cannot access the drives (e.g. E:\ and F:\) which are selected for backup. Removing the snapshot policy allows the backup to run, but then obviously there's no VSS.