My primary backup system runs Xeon E5405 CPUs (8 cores total at 2GHz) with 32GB RAM. Duplicati rarely uses more than one core or pushes any single core past 50%, and very little else is running. The OS (Server 2012) and Duplicati together use only 1.5-2GB of RAM at any given time. Backups are read from network drives (SATA3 SSDs in RAID, connected over a 10Gb network link) and stored locally on SATA3 platter drives in RAID5. We have 5 different backup jobs set up to run at different times, ranging from 80GB to 700GB worth of files.
Once you figure out that the slowest point in this chain is the 100-120 MB/s read/write speed of the local platter drives, that becomes the limiting factor. Even then, there is plenty of processing and verifying of the files, database, backups, etc. on top of the raw I/O.
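Rough math on that ceiling, for anyone curious (the source-side read figure is my assumption; the other numbers are from the setup above):

```python
# Back-of-envelope pipeline bottleneck estimate.
# Figures are from my setup; adjust for your own hardware.
stages_mb_s = {
    "10Gb network (theoretical)": 1250,   # 10 Gbps / 8 bits per byte
    "source SSD RAID read":       1000,   # assumed; well above the sink
    "local platter RAID5 write":  110,    # roughly 100-120 MB/s
}

bottleneck = min(stages_mb_s, key=stages_mb_s.get)
rate = stages_mb_s[bottleneck]
print(f"Bottleneck: {bottleneck} at ~{rate} MB/s")

# Best case for moving 700 GB at that rate, ignoring all processing:
hours = 700_000 / rate / 3600
print(f"700 GB raw copy floor: ~{hours:.1f} hours")
```

In practice a backup never hits that floor, because hashing, deduplication, and compression all sit between the read and the write.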
The initial backup always takes the longest. For one network drive with 700GB+ of mostly medium-to-small files (plus some larger ones), the initial scan, read, catalog, and backup took 5 full days. The daily backups for that set now take around 20-30 minutes, since few files change.
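For perspective, the effective rate of that initial run is tiny compared to what the platter RAID can sustain, which tells you the time goes into scanning, hashing, and compressing rather than raw I/O. Quick sanity check (assuming 5 full 24-hour days):

```python
# Effective throughput of the initial 700 GB backup over 5 days.
size_mb = 700 * 1000          # ~700 GB in MB
seconds = 5 * 24 * 3600       # 5 full days

rate = size_mb / seconds
print(f"Effective rate: ~{rate:.1f} MB/s")     # ~1.6 MB/s

# Compare with the ~110 MB/s the platter RAID can sustain:
print(f"Disk utilization: ~{rate / 110:.1%}")  # ~1.5%
```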
Another backup set is 200GB of many smaller files with lots of churn (changed, new, and removed files); that one takes over an hour per day to complete.
So there are a lot of factors that come into play. I plan to test it on my newer system at home but have not yet: Ryzen 5 2600, OS on a SATA SSD (~420 MB/s r/w), an NVMe drive for storage (2100-2400 MB/s r/w), and a standard SATA3 platter drive for storage (80-110 MB/s r/w).
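When I do test it, I'll probably sanity-check each drive's raw sequential speed first so the Duplicati numbers have a baseline. A rough sketch of what I have in mind (the paths are placeholders, and a real benchmark should use larger sizes and bypass the OS cache):

```python
import os, time

def seq_write_mb_s(path, size_mb=1024, chunk_mb=8):
    """Rough sequential write benchmark: returns MB/s for `path`."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())          # force data out to the drive
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

# Placeholder paths; point these at the SATA SSD, NVMe, and platter drives.
for drive in (r"C:\bench.tmp", r"D:\bench.tmp", r"E:\bench.tmp"):
    print(drive, f"~{seq_write_mb_s(drive):.0f} MB/s")
```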