Mono CPU utilization is absurdly high


#1

I’m noticing on a modern Mac running 10.10.5 that mono-sgen64 (which was only installed for Duplicati) is routinely taking 120-150% of CPU

That’s not good - can anything be done about this?


#2

What settings are you using for your backup? (Feel free to just paste an “export as command-line” result with personal data like passwords, ids, and hashes changed.)


#3

I am seeing the same thing. I thought it was due to the mono installed on my machine for C# dev with Unity, but I’ve uninstalled and reinstalled mono and have not opened Unity since a fresh restart.

Mac OS up to date.

Just installed Duplicati yesterday. CPU usage on mono-sgen64 seems to fluctuate between 100% and 170%.

Any thoughts?


#4

Same here on macOS High Sierra. Maybe it’s related to the file compression routines used by Duplicati?


#5

If you don’t mind doing a test you could try adding --zip-compression-level=0 to your job (or a test job) and see how the mono-sgen64 CPU usage looks…

--zip-compression-level
This option controls the compression level used. A setting of zero gives no compression, and a setting of 9 gives maximum compression.
Default value: “BestCompression”
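
If you want to try this from the command line rather than the GUI, an invocation would look roughly like the sketch below. The target URL, source path, and passphrase are placeholders, and the exact wrapper name for the Duplicati command-line tool can vary by install, so adjust to match your setup:

    duplicati-cli backup "b2://my-bucket/backup" "/Users/me/Documents" \
        --passphrase="changeme" \
        --zip-compression-level=0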


#6

Hi Jon, in a test with compression disabled as per your command line, the load fluctuated between 8% and 115%; the average was about 60%. This is on a dual-core Core i3. The upload was 5 to 8 MB/sec. I have attached a graph from Activity Monitor.


#7

And with the normal job today (a bi-weekly update of 300 GB+ of data, of which ~15 GB may have changed), so with zip compression and encryption on, the load was not very different:
it fluctuated between 5% and 120%, on average about 50%. So maybe the 170% load only happens on the initial run?


#8

That could be the case: the initial run has to back up 100% of the data (lots more database writes), while later runs, unless you have a lot of changed files, still scan the same amount of source data but most of it turns into database reads, and there’s generally no need to re-block / re-hash files with no changes.

So based on your tests it sounds like compression likely isn’t the issue, which leaves things like these that happen a lot on initial backups and on backups with many file changes:

  • sqlite writes
  • hashing
  • encryption
  • file transfers
  • local source IO (reading complete new / changed files for blocking and hashing purposes vs. just metadata to determine no changes; see the sketch below for the idea)
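
To make that last point concrete, here is a rough Python sketch of the two code paths. This is not Duplicati’s actual implementation; the block size and hash algorithm are just illustrative assumptions:

    import hashlib
    import os

    # Illustrative only; the real block size and hash depend on version and job settings.
    BLOCK_SIZE = 100 * 1024

    def looks_unchanged(path, recorded_size, recorded_mtime):
        # Cheap path: compare size and modification time against what the
        # local database recorded last run. No file contents are read.
        st = os.stat(path)
        return st.st_size == recorded_size and int(st.st_mtime) == recorded_mtime

    def block_and_hash(path):
        # Expensive path: read the whole file in fixed-size blocks and hash
        # each block. This is the work that dominates the initial backup and
        # any run with lots of changed files.
        hashes = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                hashes.append(hashlib.sha256(block).hexdigest())
        return hashes

On a run where almost nothing changed, nearly every file takes the cheap path, which is why steady-state CPU load ends up so much lower than on the first run.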