Backing up a huge disk image file

I’ve used Duplicati for some time now and I simply love it.

I understand that even for big files only the differences are identified and only these differences are backed up.

I also understand that it is a file backup system, not a disk image backup system.

I have a script that uses the dd command on Linux to create an image backup. I then shrink the image using PiShrink and compress it with 7z. I am trying to use Duplicati to back up this file.
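For reference, the script does roughly the following (the device name and paths here are just placeholders, not my actual ones):

```bash
#!/bin/bash
# Rough outline of my current workflow (device and paths are placeholders).
set -euo pipefail

SRC=/dev/mmcblk0            # source device
IMG=/backups/pi.img         # raw image written by dd
ARCHIVE=/backups/pi.img.7z  # compressed archive that Duplicati backs up

# 1. Raw image of the whole device
dd if="$SRC" of="$IMG" bs=4M status=progress

# 2. Shrink the image with PiShrink
pishrink.sh "$IMG"

# 3. Compress the shrunken image with 7z
7z a "$ARCHIVE" "$IMG"

# Duplicati is then pointed at $ARCHIVE
```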

But I soon realised that every time a new image is created, the entire new 7z file is uploaded, even though the changes on the disk must be minimal. If I skip the shrinking and compression and instead back up the raw image file created by dd, will that help, i.e. will Duplicati upload only the differences so that each version doesn't take too much space?

Welcome to the forum!

Yes, I’m guessing that would make a big difference. Duplicati’s deduplication engine is fairly simple: it splits files into fixed-size blocks and only uploads blocks it hasn’t seen before. A small change in the source data can ripple through the compressed output, so most of the 7z file changes and deduplication is largely defeated, whereas in a raw image the unchanged regions stay byte-identical.

Give it a shot without pre-compressing the file and let us know how it works.
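As a rough sketch, a stripped-down version of your script might look like this (device, paths, and the backup destination are placeholders, not tested values):

```bash
#!/bin/bash
# Sketch: back up the raw dd image directly, with no PiShrink or 7z step,
# so Duplicati's block-level deduplication can match unchanged regions
# between versions. Device, paths, and destination URL are placeholders.
set -euo pipefail

SRC=/dev/mmcblk0       # source device
IMG=/backups/pi.img    # raw image that Duplicati backs up as-is

dd if="$SRC" of="$IMG" bs=4M status=progress

# Then point your existing Duplicati job at $IMG, e.g. via the CLI:
duplicati-cli backup "file:///mnt/backup-target" "$IMG"
```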


Thanks @drwtsn32

I tried this and it works 🙂
