I’m about a week into using Duplicati to back up 46 GB across two backup jobs, with the backup files uploaded to Tardigrade/Storj. I noticed yesterday that the number of bytes written to disk by the “mono-sgen64” process is really high. My concern is that this might unnecessarily shorten the life of my laptop’s SSD, and it just doesn’t seem necessary to write so much to disk when only KBs are being uploaded per run.
Based on when the process started, I’m averaging 12.5 GB/day written to local disk (as reported by macOS Activity Monitor).
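For reference, that average is just the process’s “Bytes Written” figure divided by its uptime. A quick sketch of the arithmetic (the numbers below are placeholders chosen to illustrate the rate, not actual Activity Monitor output):

```shell
# Rough sanity check of the GB/day figure: divide the bytes written
# by the process uptime. Placeholder numbers, not real measurements.
bytes_written=87500000000        # ~87.5 GB reported for mono-sgen64
seconds_up=$((7 * 24 * 3600))    # roughly one week of uptime
echo "$bytes_written $seconds_up" | awk '{printf "%.1f GB/day\n", $1/1e9 * 86400/$2}'
# → 12.5 GB/day
```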
In the Duplicati settings, I enabled “synchronous-upload”, thinking that this would prevent writing to the temporary folder. The process is still writing about 1 GB/hour to local disk.
The two SQLite databases for these backup jobs are 656 MB and 192 MB, so I would guess it isn’t database access that’s driving the bytes written.
Is this normal? Any ideas on what I can do to reduce this number?
Thank you!
My default options are:
- concurrency-block-hashers: 1
- concurrency-compressors: 1
- concurrency-max-threads: 1
- synchronous-upload: checked
- thread-priority: idle
Some backup-specific options:
- Remote volume size: 64MB
- asynchronous-concurrent-upload-limit: 1
- auto-cleanup: checked
- backup-test-samples: 0
- list-verify-uploads: checked
- no-auto-compact: checked
- zip-compression-method: LZMA
- Backups are configured to run hourly
- Encryption is turned off
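In case it helps, here’s the setup above expressed as a single CLI invocation. I run through the GUI, so this is only a sketch: the backend URL and source path are placeholders, and the option names are the ones I believe Duplicati’s CommandLine accepts, so please double-check them against your version.

```shell
# Sketch of an equivalent Duplicati.CommandLine invocation.
# The tardigrade:// URL and source path are placeholders.
mono Duplicati.CommandLine.exe backup \
  "tardigrade://bucket/backup" "/Users/me/Documents" \
  --dblock-size=64MB \
  --synchronous-upload=true \
  --asynchronous-concurrent-upload-limit=1 \
  --concurrency-max-threads=1 \
  --concurrency-block-hashers=1 \
  --concurrency-compressors=1 \
  --thread-priority=idle \
  --backup-test-samples=0 \
  --list-verify-uploads=true \
  --auto-cleanup=true \
  --no-auto-compact=true \
  --zip-compression-method=LZMA \
  --no-encryption=true
```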