Very slow backup on large files

I tried to back up the qcow2 files of several running virtual machines, each about 200GB-2TB in size, to S3 storage. I noticed that Duplicati takes almost forever to back them up: one CPU core sits at 100% the whole time and there is no network traffic at all. I just let Duplicati run for a few hours and then stopped the process.
What is the best way to back up large files? Should I increase the 50MB chunk size?

On the first run, Duplicati has to compute a hash for every file and compress/upload it.
On subsequent backups, only the date of the last change is checked (by default).
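
To make that concrete, here is a minimal Python sketch of the two checks (an illustration of the idea only, not Duplicati's actual code): the cheap timestamp/size comparison used on later runs versus the full read-and-hash pass every file needs on the first run.

```python
# Conceptual sketch only -- NOT Duplicati's code, just the idea behind the
# default behaviour: a file is only re-read and re-hashed when its
# last-modified time (or size) differs from what was recorded last time.
import hashlib
import os

def needs_rescan(path, recorded_mtime, recorded_size):
    """Cheap check used on every backup after the first."""
    st = os.stat(path)
    return st.st_mtime != recorded_mtime or st.st_size != recorded_size

def hash_file(path, chunk_size=1024 * 1024):
    """Expensive full read + hash, needed for every file on the first run."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```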

For files that big, hashing will take a long time…
What is the disk I/O during the backup (read MB/s)?
And maybe try lowering the compression setting to 1 or 2.
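
One quick way to answer the disk I/O question is to time reading a chunk of one of the images with and without hashing. A rough sketch (my own, not a Duplicati tool; the path and sample size are placeholders you would adjust):

```python
# Read the first few GiB of one qcow2 file and compare plain-read throughput
# with read + SHA-256 throughput. If hashing is much slower than reading,
# the CPU (not the disk) is the bottleneck.
import hashlib
import time

PATH = "/var/lib/libvirt/images/vm1.qcow2"  # hypothetical path, change to yours
SAMPLE_BYTES = 2 * 1024**3                  # sample the first 2 GiB
CHUNK = 4 * 1024**2

def throughput(do_hash):
    h = hashlib.sha256() if do_hash else None
    read = 0
    start = time.monotonic()
    with open(PATH, "rb") as f:
        while read < SAMPLE_BYTES:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            read += len(chunk)
            if h:
                h.update(chunk)
    return read / (1024**2) / (time.monotonic() - start)

print(f"read only     : {throughput(False):7.1f} MB/s")
print(f"read + sha256 : {throughput(True):7.1f} MB/s")
```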

Seeing as your source files are large, the backup will always run for a very long time. The only suggestion I would make here is to change the hash to something simpler like MD5 (if your CPU doesn’t support SHA acceleration) and to change the DBlock size to something larger like 128MiB or 256MiB. Apart from that there’s not much else you can do with Duplicati or any other program that backs up using delta blocks.
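
Before switching, it may be worth checking whether MD5 is actually faster on that CPU. A rough sketch using Python’s hashlib as a stand-in (assumption: its relative MD5/SHA speeds are roughly indicative of what Duplicati’s hashing will see, which may not hold if one algorithm gets hardware acceleration and the other doesn’t):

```python
# Compare raw hashing throughput of MD5, SHA-1 and SHA-256 on this machine.
import hashlib
import time

data = bytes(64 * 1024**2)  # 64 MiB of zeros is enough to compare throughput

for name in ("md5", "sha1", "sha256"):
    h = hashlib.new(name)
    start = time.monotonic()
    h.update(data)
    mb_s = len(data) / (1024**2) / (time.monotonic() - start)
    print(f"{name:7s}: {mb_s:8.0f} MB/s")
```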

Hi @hiennh, welcome to the forum!

If you’re backing up the raw qcow2 files (rather than the CONTENTS of those virtual drives, e.g. from inside the VMs), it may seem like the files have a lot of changes every backup. This is due to how virtual disk images (especially encrypted ones) are written.
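
As a toy illustration of the delta-block view (not Duplicati’s real chunking), you can hash an image in fixed-size blocks and compare two snapshots; even a few scattered guest writes can dirty blocks all over the file:

```python
# Hash a file in fixed-size blocks and count how many blocks differ between
# two versions of the same image. Scattered writes inside the VM tend to
# touch blocks spread across the whole file, so many blocks look "changed".
import hashlib

BLOCK = 100 * 1024  # Duplicati's default deduplication block size is around 100 KB

def block_hashes(path):
    hashes = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK):
            hashes.append(hashlib.sha256(block).digest())
    return hashes

def changed_blocks(old_path, new_path):
    old, new = block_hashes(old_path), block_hashes(new_path)
    return sum(1 for a, b in zip(old, new) if a != b) + abs(len(old) - len(new))

# Example (hypothetical paths): compare yesterday's copy with today's image.
# print(changed_blocks("vm1.qcow2.yesterday", "vm1.qcow2"))
```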