This is not a “non-issue.” I’ve read the Tech Report test.
“Developed by […], this handy little app includes a dedicated endurance test that fills drives with files of varying sizes before deleting them and starting the process anew.”
That is not how the vast majority of people use their laptop, desktop, or server drives, and, just by total coincidence, it also happens to be the most favorable possible scenario for an SSD’s wear longevity.
- The test is compressed into a very short period of time, so caches have the maximum chance of absorbing re-writes
- No already-written blocks are repeatedly re-written except for directory structure information
- Re-writes are spread across a very large portion of the flash cells, and after every test cycle, the entire drive is freed for the wear-leveling algorithm to use again
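The effect of those three points can be illustrated with a toy model. This is a deliberately simplified sketch, not how any real flash translation layer works: it assumes an idealized dynamic-only wear leveler that recycles free cells round-robin, and compares the endurance test’s fill-erase-repeat pattern against a drive where most cells hold static data.

```python
# Toy model: per-cell erase counts under an idealized dynamic-only
# wear leveler that rotates through the free pool round-robin.
from collections import deque

def simulate(total_cells: int, static_cells: int, writes: int) -> list[int]:
    """Return per-cell wear counts after `writes` block writes.
    The first `static_cells` hold data that never changes, so the
    leveler can only rotate through the remaining free cells."""
    wear = [0] * total_cells
    free = deque(range(static_cells, total_cells))  # only the free pool rotates
    for _ in range(writes):
        cell = free.popleft()
        wear[cell] += 1
        free.append(cell)  # block is freed/TRIMmed and immediately reusable
    return wear

# Endurance-test-like pattern: the whole drive is freed every cycle.
even = simulate(total_cells=10, static_cells=0, writes=100)
# Real-world-like pattern: 90% of the drive holds static data.
skewed = simulate(total_cells=10, static_cells=9, writes=100)
print(max(even), max(skewed))  # -> 10 100
```

Same 100 writes in both runs, but in the second run every one of them lands on the lone free cell, which wears out ten times faster.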
Again: this scenario does not represent almost anyone’s real-world usage. I can think of two use cases that come close: videographers (many digital cinema cameras write directly to SATA or similar drive modules) and people using those same drives as backup destinations for traditional backup programs (think Legato and the like), where the volume is written to, read back for verification, then recycled by re-writing it start to finish.
For most users, anywhere from 50% to 90% or more of the drive is filled with data, and much of that data never changes. If your drive is 90% full, then unless it implements static wear leveling, its effective wear capacity is reduced roughly tenfold: every write must land in, and re-cycle, the 10% of flash cells that remain available.
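Back-of-the-envelope, under the same assumption (dynamic-only wear leveling, so re-writes are confined to the free cells), the wear multiplier on those cells is just the reciprocal of the free fraction:

```python
# Rough wear multiplier on the free cells vs. an empty drive, assuming
# dynamic-only wear leveling confines all re-writes to the free space.
def wear_multiplier(fill_fraction: float) -> float:
    free = 1.0 - fill_fraction
    if free <= 0:
        raise ValueError("drive is completely full")
    return 1.0 / free

print(wear_multiplier(0.90))  # 90% full -> roughly 10x wear on the free cells
print(wear_multiplier(0.50))  # 50% full -> roughly 2x
```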
Note that I am assuming drives are not over-provisioned. This is partly because drive manufacturers do not generally advertise over-provisioning levels; I also don’t know how many SSDs actually implement static wear leveling; the industry similarly hides behind “we do wear leveling!” It’s probably safe to assume that most SATA and NVME SSDs implement at least dynamic wear leveling, but I would not be surprised if static wear leveling doesn’t exist except in the high-end desktop and enterprise drives.
Anyway. This is why Duplicati’s backup process causes so much wear (on a drive that doesn’t do static wear leveling): if you have a 500GB drive holding 450GB of data, of which 300GB is backed up by Duplicati, the backup alone puts roughly 6 wear cycles on the free 10% of the drive, because 300GB of writes get funneled through 50GB of free flash. And if you have extra verification options turned on, every verification results in still more writes, because the verification streams each block to disk, not to memory.
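The arithmetic behind that example, with the numbers from the paragraph above (the assumption, as before, is that without static wear leveling every backup write rotates through only the free space):

```python
# Numbers from the example above; hypothetical round figures.
drive_gb = 500
used_gb = 450
backup_gb = 300  # data Duplicati reads and re-writes as backup volumes

free_gb = drive_gb - used_gb  # 50 GB of flash absorbing all the writes
cycles = backup_gb / free_gb  # 300 / 50
print(cycles)  # -> 6.0 full wear cycles on that 10% of the drive
```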
Worse, that same 10% of the drive is also absorbing the vast majority of re-writes from everyday user and system activity: logs, system updates, filesystem metadata, virtual memory (glares at Chrome), application caches, email clients, and so on.