Duplicati temp dup-xxxx files not being deleted

Hi,

Currently running 2.0.3.6_canary_2018-04-23.

Trying to back up about 1.5 TB of files to Google Drive. However, the initial backup never completes, as the C: drive keeps running out of disk space. I have found that Duplicati creates its temporary files in C:\Windows\Temp before uploading them to Google Drive, as a series of files with names like dup-xxxxxxx. I have tested with and without --asynchronous-upload-limit=4. The remote volume (dblock) size I’ve set is 100MB. So, after running for a while, the C:\Windows\Temp folder fills up with lots of 100MB dup-xxxxxxx files.

I thought Duplicati deletes the dup-xxxxxx files once they are uploaded. Additionally, by setting --asynchronous-upload-limit=4, I thought Duplicati would keep at most 4 of those files at a time, deleting and recreating them as needed. At the moment, it creates and uploads those files but never seems to delete them, so the hard disk is slowly filling up again (the first time it completely filled the C: drive and crashed - I could not recover the backup so I started again).
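For reference, the job boils down to something like this on the command line (the source path, destination folder, and authid below are placeholders, not my real values):

Duplicati.CommandLine.exe backup "googledrive://Backups?authid=XXXXXXXX" "D:\Data" --dblock-size=100MB --asynchronous-upload-limit=4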

Am I missing an option somewhere, or is this a bug?

Thanks.


You are correct that those files should be deleted once they are no longer in use.

Are they all from the same run (date?) or do they go back over previous backup runs?

Oh, and have you always had the issue or is it new since starting with 2.0.3.6?


Edit:
If you just want to get going and not diagnose the underlying cause, you could try setting --auto-cleanup=true.

--auto-cleanup
If a backup is interrupted there will likely be partial files present on the backend. Using this flag, Duplicati will automatically remove such files when encountered.
Default value: “false”
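If you want to try it, in the GUI it’s just another advanced option on the job; from the command line it would be appended like any other option, roughly like this (destination and source are placeholders):

Duplicati.CommandLine.exe backup "googledrive://Backups?authid=XXXXXXXX" "D:\Data" --auto-cleanup=true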

Just to add to this discussion - I’ve also just noticed this on 2.0.3.6. I had never paid attention to Duplicati’s use of the /tmp folder before, and only noticed it today because it filled up. Duplicati seems to be creating lots of dup-* files of my chunk size (50MB) and not deleting them. I had to wipe them all out even though a backup job was in progress.

Should these files be deleted as the backup progresses, or will they build up first? When I had to wipe them all, there was 16GB worth of 50MB dup-* files in /tmp, so this really isn’t sustainable as my backup size is much larger than /tmp.

I’ve just turned on auto-cleanup, but the backup job is still in progress so I presume I’ll need to wait for it to finish and then see if it takes effect on the next run?

Many thanks,

James

It won’t. Well, it won’t do what we want. I forgot that --auto-cleanup cleans up DESTINATION files, not the local stuff.

Unless you really want to help us with testing on 2.0.3.6 I’d suggest downgrading to 2.0.3.5 which should solve the problem going forward (though it won’t clean up any already existing dup-* temp files).

I just had an idea, if you’re willing to do a test run…

If so, please try setting --concurrency-max-threads=1 (and maybe --concurrency-block-hashers=1 and --concurrency-compressors=1) to essentially disable the multi-threading.

If the temp files do get cleaned up, that would help confirm that the source of the issue is the multi-threading code.
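For example, a test run from the Linux command line might look roughly like this (the duplicati-cli wrapper, destination URL, and source path here are just placeholders for however you normally invoke the job):

duplicati-cli backup "file:///mnt/backup" /home/user/data --concurrency-max-threads=1 --concurrency-block-hashers=1 --concurrency-compressors=1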

Very happy to help with some testing. I’m waiting for a backup to run right now; it does seem to be running correctly and transferring the files, so I’m giving each temp file a 10 minute window to be uploaded, and then the following command is running every few minutes through cron to clean up /tmp and prevent a disk-full situation (sadly my test system has /tmp as part of the root filesystem, so it’s not pretty when it fills up).

find /tmp -maxdepth 1 -mmin +10 -type f -name dup-\* -exec rm -f '{}' \;
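In case it helps anyone else hitting this, the crontab entry is just something along these lines (the 5-minute interval is only an example of “every few minutes”):

*/5 * * * * find /tmp -maxdepth 1 -mmin +10 -type f -name dup-\* -exec rm -f '{}' \;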

Once this backup completes, I’ll add those flags to the backup config and let you know the results.


I too am experiencing this issue.

Linux Mint 18.3, Duplicati 2.0.3.6_canary_2018-04-23.

I experimented by disabling the multi-threading entirely, setting the three concurrency options to 1, and unfortunately the problem persists.

I am backing up to a local folder on a USB HDD and my settings are:

asynchronous-upload-limit=1
auto-cleanup=on
auto-vacuum=on
blocksize=1MB
dblock-size=1GB
disable-on-battery=on
retention-policy=30D:0s,3M:1D,1Y:1M,U:1Y
verbose=on
concurrency-max-threads=1
concurrency-block-hashers=1
concurrency-compressors=1

Any thoughts?

Just in the process of testing here - have added the following to my configuration:

concurrency-max-threads=1 
concurrency-block-hashers=1
concurrency-compressors=1

The /tmp filesystem is filling up with dup-* files as before. Does this help with debugging the issue? Please let me know what else I can do to help get this fixed 🙂

For testing purposes I changed my “dblock-size” to 100MB and still get the same unwanted behaviour.

Hi all!

Can confirm the issue under Debian stretch.
Temp files don’t seem to be deleted, no matter which asynchronous upload limit is set.
Result: the /tmp folder fills up completely, compromising operation of the whole system…

Thank you (and to the others that have posted)!

I’m going to flag this topic as a bug (likely introduced with, but not as a part of, the multi-threading) and see if @kenkendk or @Pectojin have any thoughts on it.

Hi All,

Since starting this thread, the same backup is still running (since 21/5). Backing up to Google Drive seems to be taking a loooong time. Looking at the temp files, it’s creating a 100MB chunk on average about every 11 minutes. CPU and network utilisation on the server is minimal. The network picks up for a minute or so every 11 minutes (I’m assuming that’s when it transfers the 100MB chunk to Google). So, I’m not sure why it’s running so slowly. As a result, I’ve not been able to test any of the other suggestions thus far. To keep the server from running out of space, I’ve scripted a batch file to automatically delete any of the created temp files that are older than 6 hours.
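For anyone needing the same stopgap, something like this PowerShell one-liner (called from the batch file or a scheduled task) does the job; adjust the path if your temp folder isn’t the default C:\Windows\Temp:

powershell -Command "Get-ChildItem 'C:\Windows\Temp\dup-*' -File | Where-Object { $_.LastWriteTime -lt (Get-Date).AddHours(-6) } | Remove-Item -Force"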

If I’m forced to reboot the server for any reason before the current backup completes, will Duplicati know to pick up where it left off?

Thanks.

Yep. Duplicati does most of its work in “atomic” chunks (meaning it will either completely finish its current step or will start that step over the next time it is run).

Kudos to you for sticking it out over a 24-day (and counting) backup run! 👏

That being said, we definitely need to figure out why it’s so slow for you. My GUESS is that it’s the database access in 2.0.3.6, and you’ll find that downgrading to 2.0.3.5 (if that’s an option) will give you much better performance (and self-deleting temp files).

Any updates on this problem yet? Downgrading is not an option, and having to manually remove files from the temp dir is unreliable: if it fills up, it can and will crash and bring down the system.

Thanks
Myk

I believe a new version is “close”, but I don’t know exactly what it will include or whether it will be canary, experimental, or beta.

I am running on unRAID via a Docker container and it works well. There is an update available, but it would downgrade me to the beta version, and then because the database is v5 with canary, the beta will not connect because it expects v4 🙁

Release: 2.0.3.7 (canary) 2018-06-17

still leaving temp files behind

Yes, I can confirm that the temp files are still being left behind in 2.0.3.7. Backup speed to Google Drive appears to be fixed. However, that also means the Temp folder fills up a LOT faster now!

I upgraded over the 2.0.3.6 installation, and now, in addition to the temp file issue, the backup will no longer run if I set snapshot-policy to require. The log says Duplicati cannot access the drives (e.g. E:\ and F:\) which are selected for backup. Removing the snapshot policy allows the backup to run, but then obviously there is no VSS.

@kenkendk or @Pectojin, do you know if this has been looked into at all yet?

@PhoenixAlpha, I have broken your VSS issue out into its own topic (hope you don’t mind).

This issue had slipped off my list. Looking at it now.