Current thinking on speeding up Duplicati

I use Duplicati to back up from a FreeNAS jail to a Synology NAS (the Synology is on a 1 Gbps connection).
I am getting about 50 MB/s, which I suppose isn't bad.
The FreeNAS box runs a Xeon E5-2660 v3 with 128 GB of memory on a 10 Gbps network.
The backup location is on the local network.

One of the backup jobs is about 10 TB of mostly incompressible files. The other jobs are smaller, so how long they take doesn't matter so much. I am in the middle of the initial backup, which is (not surprisingly) taking a long time. Are there any ways I can speed this up?

I suppose I could have turned off compression.
I did change some settings:
Remote volume size to 1 GB
Blocksize to 1 GB
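
(For reference, if you drive this from the command line instead of the web UI, I believe those two settings correspond to the --dblock-size and --blocksize advanced options. The sketch below is only an illustration with a placeholder source path and destination URL, not a recommendation; as far as I know, --blocksize also cannot be changed once a backup already contains data.)

```
duplicati-cli backup "ssh://synology.example/backups/" /mnt/tank/data \
  --dblock-size=1GB \
  --blocksize=1GB
```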

With a 1 Gbps network you can achieve a bit over 100 MB/s at best, so 50 MB/s isn't bad at all considering that Duplicati still does a lot of processing. There is no way to disable the chunking, deduplication, and repackaging of chunks into volumes. Encryption and compression add even more overhead, so disabling those might help. Also keep in mind that some Duplicati operations are single-threaded, which may also be limiting your throughput.
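
If you want to try that, the relevant advanced options should look roughly like the sketch below. I'm assuming the command-line client here (the same option names show up under a job's advanced options in the web UI), and the source path and destination URL are placeholders rather than a tested recipe.

```
duplicati-cli backup "ssh://synology.example/backups/" /mnt/tank/data \
  --no-encryption=true \
  --zip-compression-level=0
```

Skipping encryption only really makes sense because the destination sits on your own LAN; for an offsite target you would probably want to keep it enabled.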

Once the initial backup is complete, your later backups will be much faster.

I kinda thought that would be the result.
Question: I have already started this 10 TB backup, and I have just added the global option that tells Duplicati not to compress a listed set of file extensions.

Will this option apply from now on to all backups, so that future/new .zip files will no longer be compressed by Duplicati, or will it only apply to new jobs?
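
(For clarity, I'm not 100% sure of the exact option name, but I believe the setting I changed is --compression-extension-file, which points at a text file listing extensions Duplicati should store without compressing. The path below is a placeholder; my understanding is that the install ships a default_compressed_extensions.txt you can copy and extend with your own extensions.)

```
--compression-extension-file=/path/to/default_compressed_extensions.txt
```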