Hi!
Yes, there should be no huge difference in performance if the designs are close enough. And again, let me say that this testing is not fully conclusive yet - I need to run similar tests on a larger data source.
This level of compression impact is strange - there was plenty of CPU headroom. Testing was done on an i5-3570 @ 3.40GHz, I haven't seen a single core pegged, and compression should spread across multiple cores pretty well.
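As a rough sanity check of that claim (this has nothing to do with Duplicati's internals - the chunk size, deflate level and worker counts below are made up), something like this Python sketch could show whether deflate work actually scales across cores on this box:

```python
# Rough sketch (not Duplicati internals): compress independent chunks in
# parallel and see whether wall-clock time drops as workers are added.
import os
import time
import zlib
from concurrent.futures import ProcessPoolExecutor

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB per chunk, arbitrary test size


def compress_chunk(data: bytes) -> int:
    """Compress one chunk at deflate level 6 and return the compressed size."""
    return len(zlib.compress(data, 6))


def run(workers: int, chunks: list) -> float:
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(compress_chunk, chunks))
    return time.perf_counter() - start


if __name__ == "__main__":
    chunks = [os.urandom(CHUNK_SIZE) for _ in range(16)]  # test data
    for workers in (1, 2, 4):
        print(f"{workers} worker(s): {run(workers, chunks):.2f}s")
```

If the 4-worker run is not meaningfully faster than the 1-worker run, the bottleneck is somewhere other than the compression itself.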
For backups, `--zip-compression-level=1` is actually not bad if performance is a concern. And if you're backing up already-compressed files, it may actually be recommended.
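To illustrate why, here is a quick Python sketch (not Duplicati itself - I'm just using random bytes as a stand-in for already-compressed input) comparing deflate level 1 and level 9 on data that won't shrink any further:

```python
# Quick check: deflate level 1 vs level 9 on incompressible data.
# Random bytes stand in for files that are already compressed.
import os
import time
import zlib

data = os.urandom(32 * 1024 * 1024)  # 32 MiB of incompressible bytes

for level in (1, 9):
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(out) / len(data)
    print(f"level {level}: {elapsed:.2f}s, ratio {ratio:.3f}")
```

Both levels end up with a ratio of roughly 1.0 on this kind of input, so the higher level only burns CPU time without saving any space.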
The behavior of `--synchronous-upload="true"` was a surprise for me as well - I remember looking at the disk I/O chart and seeing better utilization when it was set to true. I'm pretty sure I haven't screwed up the testing, but I will re-test again.
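To be clear about what I expected: here is a toy model (plain Python, made-up timings, definitely not how Duplicati actually implements it) of why overlapping volume preparation with upload should normally beat doing them strictly in turn:

```python
# Toy model of my understanding of the option (not Duplicati's code):
# "synchronous" prepares and uploads volumes one after another, while the
# asynchronous case prepares the next volume while the previous one uploads.
import time
from queue import Queue
from threading import Thread

PREPARE_TIME = 0.2  # pretend seconds to compress/encrypt one volume
UPLOAD_TIME = 0.3   # pretend seconds to upload one volume
VOLUMES = 5


def prepare(i):
    time.sleep(PREPARE_TIME)
    return f"volume-{i}"


def upload(vol):
    time.sleep(UPLOAD_TIME)


def synchronous():
    for i in range(VOLUMES):
        upload(prepare(i))


def asynchronous():
    q = Queue(maxsize=1)  # small buffer between preparer and uploader

    def uploader():
        while True:
            vol = q.get()
            if vol is None:
                break
            upload(vol)

    t = Thread(target=uploader)
    t.start()
    for i in range(VOLUMES):
        q.put(prepare(i))
    q.put(None)
    t.join()


for name, fn in (("synchronous", synchronous), ("asynchronous", asynchronous)):
    start = time.perf_counter()
    fn()
    print(f"{name}: {time.perf_counter() - start:.2f}s")
```

In this toy model the overlapped version finishes noticeably sooner, which is exactly why the real-world result surprised me.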
I have some ideas on what profiling I can do on this, but first I want to try similar tests on a much bigger set of very large files - about 15GB: a Windows Server 2016 ISO, a .NET app memory dump, a VirtualBox vmdk file with CentOS, and a couple of rar files. The largest file is 5.5GB, and most of the files are non-compressible.
Will update this thread with the results.