I’ve used Duplicati for some time now and I simply love it.
I understand that even for big files, only the differences are identified and backed up.
I also understand that it is a file backup system, not a disk image backup system.
I have a script that uses the `dd` command on Linux to create an image backup. I then shrink the image using PiShrink, and compress it using 7z. I am trying to use Duplicati to back up this file.
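For reference, the script I described is roughly like this (a sketch with placeholder paths, not my exact script; the PiShrink and 7z steps are commented out so the snippet runs even on a machine without those tools or a real SD-card device):

```shell
set -eu
SRC=${SRC:-/dev/zero}        # in the real script: the SD-card device, e.g. /dev/mmcblk0
IMG=${IMG:-backup.img}

# 1. Raw block-level copy of the device into an image file.
dd if="$SRC" of="$IMG" bs=1M count=8 2>/dev/null

# 2. Shrink the image (PiShrink truncates the unused tail of the last partition).
# pishrink.sh "$IMG"         # needs PiShrink and a real partition table in the image

# 3. Compress the shrunk image -- this is the file Duplicati then backs up.
# 7z a "$IMG.7z" "$IMG"

ls -l "$IMG"
```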
But I soon realised that every time a new image is created, the whole new 7z file is uploaded, even though the differences on the disk must be minimal. If I skip the shrinking and compression and instead let Duplicati back up the raw image file created by `dd`, will that help? That is, will only the differences be uploaded, so that each version doesn't take too much space?
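The behaviour I'm seeing can be reproduced in miniature. The sketch below builds a dummy "image", flips one byte early in it, and counts how many fixed 100 KiB chunks (Duplicati's default block size) of the new version are byte-identical to chunks of the old one, both for the raw file and for a gzip-compressed copy (gzip standing in for 7z, and fixed-offset chunking standing in for the real deduplication logic, so the numbers are only illustrative):

```shell
set -eu
CHUNK=102400    # 100 KiB, Duplicati's default block size

# Count chunks of file $2 whose sha256 also appears among the chunks of file $1.
shared_chunks() {
    d1=$(mktemp -d); d2=$(mktemp -d)
    split -b "$CHUNK" "$1" "$d1/c"
    split -b "$CHUNK" "$2" "$d2/c"
    h1=$(cd "$d1" && sha256sum c* | cut -d' ' -f1 | sort)
    h2=$(cd "$d2" && sha256sum c* | cut -d' ' -f1 | sort)
    n=$(comm -12 <(printf '%s\n' "$h1") <(printf '%s\n' "$h2") | wc -l)
    rm -rf "$d1" "$d2"
    echo "$n"
}

# ~4 MiB of compressible data as a stand-in for a disk image.
yes "some repetitive filesystem content" | head -c 4194304 > img1
cp img1 img2
printf 'X' | dd of=img2 bs=1 seek=1000 conv=notrunc 2>/dev/null   # change 1 byte

gzip -kc img1 > img1.gz
gzip -kc img2 > img2.gz

raw_shared=$(shared_chunks img1 img2)
gz_shared=$(shared_chunks img1.gz img2.gz)
echo "raw image:  $raw_shared shared 100 KiB chunks"
echo "compressed: $gz_shared shared 100 KiB chunks"
```

In this toy model the raw images share every chunk except the one containing the changed byte, while the compressed copies share essentially nothing: the one-byte change shifts the whole compressed stream after it, so a block-based tool has to re-upload nearly the entire archive.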