Limit the size of temporary files stored on local disk

Hello guys,
I have an account at cloud.mail.ru. I access the files in it via their application called “Disk O”. Using this app on Windows, I can mount my cloud drive as a network-attached device (with a corresponding letter in My Computer), so from Duplicati’s perspective this looks like a local drive.

Hence, I used a local drive as the destination path of my backup job. So far so good, except that when I start my backup, Duplicati creates too many files on my local disk. This causes low or even critically low disk space, and the backup gets stopped because it cannot create more files. In the meantime, the Disk O app detects that there are pending files to be uploaded and does its job, but the upload speed is not that fast.

So my question is: how can I limit the size or number of the temporarily created files?
I tried these options (rough CLI sketch below):

  • asynchronous-concurrent-upload-limit - set to 5 (and my block size is set to 1GB, so I expected no more than 5 GB to be used… unfortunately, this didn’t solve the issue)
  • throttle-upload - with this I just wanted to tell Duplicati that my destination is slow and it takes time to upload the files… unfortunately, this didn’t work either; it started at 500 KB/s and then went up to 10 MB/s, for example, so throttling is not working as I expected
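
For reference, the job is roughly equivalent to this CLI command (the drive letter and paths are placeholders for my setup):

    :: Backup to the mounted cloud drive; the target URL and source path are placeholders
    Duplicati.CommandLine.exe backup "file://Z:\DuplicatiBackup" "C:\Data" ^
      --dblock-size=1GB ^
      --asynchronous-concurrent-upload-limit=5 ^
      --throttle-upload=500KB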

Can you give me any other ideas?
Thanks in advance

That’s a confusingly similar name to asynchronous-concurrent-upload-limit, but you were likely after:

  --asynchronous-upload-limit (Integer): The number of volumes to create ahead
    of time
    When performing asynchronous uploads, Duplicati will create volumes that
    can be uploaded. To prevent Duplicati from generating too many volumes,
    this option limits the number of pending uploads. Set to zero to disable
    the limit
    * default value: 4
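
Each pending volume sits on local disk at up to the full dblock size, so this limit bounds temporary space at roughly limit × volume size, plus the volume currently being assembled. A rough sketch with your 1GB volume size (storage URL and source path are placeholders):

    :: Sketch: at most one finished 1GB volume waits on disk while the next is built
    Duplicati.CommandLine.exe backup "file://Z:\DuplicatiBackup" "C:\Data" ^
      --dblock-size=1GB ^
      --asynchronous-upload-limit=1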

Is there a reason to increase the remote volume size by a factor of 20 from the 50MB default? That could hurt you regardless of whether the disk fills. On a slow link, you’ll spend a lot of time uploading and downloading.

Can you clarify this? Are you trying different settings? Where? Or are you measuring network use? How?
If what you’re measuring is not a constant speed, that can happen, but the average speed should be correct.

Great,
I’ve set --asynchronous-upload-limit to 1, and everything is fine now. The disk no longer fills up.

About the volume size: I set it to 1GB because the cloud provider limits the maximum file size to 1.5GB, so I decided to keep this setting below 1.5GB.

With the network bandwidth shaping, I thought I could “limit” the local disk usage by telling Duplicati that my backend can only transfer at that speed.

Nevertheless, all good now.
Thanks for the help!

But you are right about increasing the default block size. So now, if small changes are made, I’m required to upload another 1GB of data, and this will consume another 1GB of space, right?

I wasn’t aware of that setting - the default block size is 50MB.
So most probably I have to revert it back to the default. But to do that, I’d have to delete all the data stored in the destination and start another full backup from scratch, right?

No. It’s a maximum, not a minimum. What happens instead is that a changed file gets its new blocks spread around among multiple dblock files, and a restore of one file might then require numerous dblock downloads (for example, a file whose blocks landed in five different dblock files needs five downloads, even if the file itself is small).

The “Remote volume size” setting on screen 5 is the same option, and it’s usually how people set (or mis-set) this value.

Well, you can’t change blocksize later, but you can change the dblock size at any time. The rearranging won’t happen, though, until compact runs and decides that there’s too much wasted space relative to the current setting. Going to a larger setting tends to trigger a compact. I suspect going to a smaller one will wait a long time before compacting.

So if you’re intent on getting to a small remote volume size, a from-scratch backup will do that for sure; otherwise you’d have to wait until the used (non-wasted) space in the big volumes gets low (likely slowly).
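
If you do lower the setting and want to nudge things along, compact can also be run manually from the command line. A rough sketch (the storage URL is a placeholder; --threshold is the wasted-space percentage that triggers compacting, default 25, so lowering it makes a compact more likely to run):

    :: Repackages wasted space into new volumes at the current --dblock-size
    Duplicati.CommandLine.exe compact "file://Z:\DuplicatiBackup" ^
      --dblock-size=50MB ^
      --threshold=10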

Choosing sizes in Duplicati
How the backup process works
How the restore process works

Aha, understood.
Thanks for the good clarification, @ts678

I will have a look at the articles you’ve linked.
Thanks a lot!