Duplicati writes its volume files to a temporary directory before uploading them to the backend, and likewise downloads files to disk when it verifies them.
This results in an enormous number of unnecessary writes. During an initial backup, if you have 400GB of data on a 500GB SSD, of which 300GB is user data to be backed up, the temporary files alone will cause three complete write cycles across the cells in the remaining 100GB of free space (assuming your drive does not employ a multi-tier cache). Every blockfile you verify post-backup is also written to disk first. The less free disk space you have, the worse the wear will be.
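To make the arithmetic explicit with the numbers above (a rough illustration; actual wear depends on the controller and how full the drive stays):

300GB of temporary volume files written into 100GB of free space ≈ 3 full write passes over those free cells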
Edit: just for the sake of completeness, this applies to drives which implement dynamic wear leveling, but not to drives which implement static wear leveling. Static wear leveling relocates data from flash cells with low write counts (even cells currently in use), specifically to avoid this problem. Since the industry goes to great lengths to hide from consumers which kind of wear leveling a particular model or family of drives uses, and how much over-provisioning it has, it would be beyond unwise to assume your drive has static wear leveling. Further, do not assume that just because the controller chip in your drive is capable of static wear leveling, the drive's manufacturer has it enabled.
On mechanical drives, it’s mostly just going to slow things down.
On my Mac, I created a RAM disk like so:
diskutil erasevolume HFS+ 'ramdisk' `hdiutil attach -nomount ram://8388608`
Note the single and back quotes. This creates a RAM disk mounted at /Volumes/ramdisk that is ~4GB (8,388,608 512-byte sectors), which is gross overkill for 100MB volume files. I haven't seen the directory grow much beyond 512MB on any of my backups with a 100MB volume size, so 1.5-2GB is probably plenty, but YMMV. Experiment; start big, trim down.
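If you do trim down, the number after ram:// is the count of 512-byte sectors, so a ~2GB disk (the size here is just an illustration) would be:

diskutil erasevolume HFS+ 'ramdisk' `hdiutil attach -nomount ram://4194304`

(4,194,304 sectors × 512 bytes = 2GB.)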
Also note that, at present in the 11-28 beta, the temp directory setting from one job seems to carry over to other jobs, which is baffling; and if you run a job using the "command line" GUI, many options are ignored or misparsed. Verify that the job is actually running as intended.
You can use the pre- and post-script options in your backup job to create the ramdisk before the job and unmount it afterward (edit: do so via diskutil eject /Volumes/volumename); it will cease to exist and free up the RAM. Note that you MUST destroy the drive afterward, as Duplicati does not always clean up after itself.
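As a sketch of what those scripts might look like, wired up via Duplicati's --run-script-before and --run-script-after options (script names, paths, and sizes here are placeholders; adjust to your setup):

#!/bin/sh
# pre-backup.sh: create the RAM disk only if it is not already mounted
[ -d /Volumes/ramdisk ] || diskutil erasevolume HFS+ 'ramdisk' `hdiutil attach -nomount ram://4194304`

#!/bin/sh
# post-backup.sh: eject the RAM disk so the memory is returned to the system
diskutil eject /Volumes/ramdisk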
Linux users have the option of using tmpfs if their /tmp directory is not already set up that way. For example, on Debian, check out the man page for /etc/default/tmpfs to enable it; the Arch Linux wiki also has a good page on the subject, as usual. tmpfs is advantageous because /tmp memory is consumed and released as needed, and it can spill to swap as well as using physical memory. Unfortunately, it is not an option on macOS.
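For example (a sketch; the exact setting name and a sensible size cap depend on your distribution and release, so verify against your own man pages):

# Debian: in /etc/default/tmpfs
RAMTMP=yes

# Or a generic /etc/fstab entry with a size cap
tmpfs  /tmp  tmpfs  defaults,noatime,nosuid,size=2G  0  0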