I understand this may be something to do with Docker and not your container, but I'm scratching my head trying to fix this. I have a system with a 2TB NVMe boot drive, which I also use for /var/lib/docker/volumes. I have a RAID6 array of six spinners, and I bought one of those high-endurance NVMe drives to use as its cache (via bcache). My goal is to use this RAID array to back up my main server. After struggling with the linuxserver.io image and permissions, I switched to the duplicati image, and everything seems to be working as expected.
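For context, this is roughly how the cache is set up (device names are placeholders, not my exact layout):

```
# Format the high-endurance NVMe as the cache device and the RAID6 md array
# as the backing device, then attach and enable read/write (writeback) caching.
# /dev/nvme1n1, /dev/md0 and /mnt/raid are placeholders.
make-bcache -C /dev/nvme1n1
make-bcache -B /dev/md0
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach   # UUID from bcache-super-show
echo writeback > /sys/block/bcache0/bcache/cache_mode
mount /dev/bcache0 /mnt/raid
```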
As I was running my first backup, I noticed that my system drive (which at the time held my Docker volume) was getting quite a bit of activity; it appeared that everything being written to my RAID array was going through my system drive first. I don't want this, since my system drive is a Samsung "consumer" drive rated for 1200 TBW, and with a 40TB source array there is going to be a lot of writing. I'd like to keep the writes on the RAID array only.
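For reference, this is roughly how I was watching it during the backup (the device names are just stand-ins for my boot drive and the array):

```
# Per-device throughput every 5 seconds; nvme0n1 = boot/system drive,
# md0/bcache0 = the RAID array (placeholders for my actual devices).
iostat -xm 5 nvme0n1 md0 bcache0
```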
I tried to fix this by moving the Duplicati Docker volume to the RAID array, thinking it was just the Docker volume caching, but that didn't fix it. I tried turning off the swap file, and that didn't fix it either. I have 128GB of RAM, and at any given time only about 2-3GB is in use (unless I'm running a backup), so I was hoping any caching would happen in memory. I do see the Docker volume grow to about 5GB of memory use, which I don't mind; if there were a way to set the caching up in memory, I'd be okay with that.
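In case it matters, this is roughly how I run it now with the volume moved; /mnt/raid is a placeholder for wherever the array is mounted, and the /data config path is just what I'm using, so adjust if yours differs:

```
# Duplicati with its config and the backup target both on the RAID mount,
# instead of a named volume under /var/lib/docker/volumes on the boot NVMe.
docker run -d --name duplicati \
  -p 8200:8200 \
  -v /mnt/raid/duplicati-config:/data \
  -v /mnt/raid/backups:/backups \
  duplicati/duplicati:latest
```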
QUESTION: How can I eliminate all of this system-drive writing during backups? I'd like to take the data from the network and write it straight to the RAID array. The RAID array does have an NVMe cache device (read and write), and I have plenty of spare RAM, so if there are any settings I can change to keep the writes off my system disk, it would be appreciated. Thanks!
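For example, would something like this be the right direction? I'm guessing Duplicati is staging its temp files under /tmp inside the container (that part is a guess on my end), so backing that path with RAM, or pointing it at the array, might keep those writes off the boot drive:

```
# Option A: back the container's /tmp with a RAM-backed tmpfs,
# sized well under my 128GB (16g here is just an example).
docker run -d --name duplicati \
  --tmpfs /tmp:rw,size=16g \
  -p 8200:8200 \
  -v /mnt/raid/duplicati-config:/data \
  -v /mnt/raid/backups:/backups \
  duplicati/duplicati:latest

# Option B: bind the temp path to the RAID array instead
# (again, /mnt/raid is a placeholder):
#   -v /mnt/raid/duplicati-tmp:/tmp
```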