Asynchronous-upload-limit not working

Thanks for including that @barthaare !

To be more specific:

  • --asynchronous-upload-limit: the number of concurrent uploads
  • --concurrency-compressors: the number of zip files being built concurrently

The logic is that Duplicati creates one temporary zip file per compressor, as set by --concurrency-compressors, and all of these are “temporary files” that gradually fill during the backup.

Once a zip file reaches the size limit, it is passed on to one of the --asynchronous-upload-limit uploaders. During the transfer the temporary file stays on disk, and it is only removed once the upload completes.

Because the temporary file is handed off from the compressor to the uploader, it no longer occupies a compressor slot, which gives the equation you found:

temporary-files = 
  asynchronous-upload-limit + concurrency-compressors
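As a sanity check, the equation can be sketched with illustrative numbers (these are not Duplicati's defaults, just example values):

```python
# Toy model of the compressor -> uploader handoff described above.
# Example values, not Duplicati defaults:
concurrency_compressors = 2       # zip files currently being filled
asynchronous_upload_limit = 4     # zip files currently being uploaded

# A filled zip is handed off to an uploader, freeing its compressor slot,
# so both sets of files can exist on disk at the same time:
max_temporary_files = concurrency_compressors + asynchronous_upload_limit
print(max_temporary_files)  # 6
```

So with 2 compressors and 4 uploaders you can see up to 6 temporary files on disk at once.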

If you want to see this effect, you can also set --asynchronous-upload-folder to a different folder, so you can monitor the temporary files being built separately from those being uploaded.
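A minimal sketch of what that might look like, assuming the `duplicati-cli` command name and the example paths shown here (adjust the storage URL, source path, and folder to your setup):

```shell
# Route in-flight uploads to a dedicated folder (example paths, adjust to taste)
duplicati-cli backup s3://my-bucket/backup /home/me/data \
  --concurrency-compressors=2 \
  --asynchronous-upload-limit=4 \
  --asynchronous-upload-folder=/tmp/duplicati-uploads

# In another terminal, watch the upload folder fill and drain:
watch -n 1 'ls -lh /tmp/duplicati-uploads'
```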

There is an option that is supposed to fix this: --synchronous-upload. The intention of this option is to not proceed with compression until the file is fully uploaded, so that the value of --concurrency-compressors defines the total number of temporary files.

Unfortunately, this option is not currently working due to an “optimization” that attempts to increase upload throughput. It will hopefully be fixed soon, with a rewrite of the uploader system.
