Extremely slow backup over LAN

I’m trying to back up my media library using Duplicati and I’m getting extremely slow average transfer speeds - around 2.85 MB/s. My library has almost 374,000 files totaling about 3.25TB, and by my calculations the initial backup would take almost two weeks at that rate. The files are almost all MPEG-2, M2TS, CR2 (Canon RAW), and ARW (Sony RAW), plus a few JPEGs. Duplicati is running with the default encryption and compression settings as well.
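
For reference, the rough math behind that estimate (plain Python, taking 1 TB as 10^12 bytes; the exact file mix and TB-vs-TiB details shift it by a day or so either way):

```python
# Back-of-the-envelope estimate of the initial backup time at the observed rate.
total_bytes = 3.25e12        # ~3.25 TB media library
speed_bps = 2.85e6           # ~2.85 MB/s observed average transfer speed
seconds = total_bytes / speed_bps
print(f"{seconds / 86400:.1f} days")   # ≈ 13.2 days, i.e. almost two weeks
```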

Duplicati is running on a QNAP NAS, backing up over SFTP to a new 8TB drive attached to a Raspberry Pi 2 (4 cores at 900 MHz). Everything is hard-wired to the same switch. The load on the Pi is very low, with a load average of 0.55/0.65/0.66, and the NAS load is also fairly low at 1.66/1.79/1.85. Neither device is anywhere near its RAM limit either.

Is there anything I can do to speed up the initial backup? I don’t mind these speeds after the first backup since the whole Pi and HDD will be moved offsite and I won’t be able to sustain 3MB/s upload anyway.

Many thanks in advance for any insight/help.

You could pull the 8TB drive from the RPI and plug it into your media server. Then back up to it as local storage (provided that they can read and write the same file system type).

If it’s faster you can let it finish, then move the disk back to the RPI and change the backup job to point over SFTP again. It doesn’t matter how the backup accesses the backup files, only that they’re there.


Ah, that is certainly something I didn’t consider. I will give that a shot and see how it fares. Thanks for the help :slight_smile:


I’ve done as suggested and now the speeds are… different, but not faster.

Take a look: duplicati1 - Streamable

It’s in the KB/s range for about a minute, then spikes up to ~120MB/s (near the end of the video) for a second, and then drops back into the KB/s range. I’ve overlaid htop on the NAS and it looks like the bottleneck here is the CPU?

Would this mean that if I started the backup process on a computer with a faster CPU, it would go much faster? I’d have to mount my source data over Samba, and I could probably connect the backup drive directly.

EDIT:

I’ve started the backup from a Windows computer with an i7 4790K in it and the backup is orders of magnitude faster. In a little over 20 minutes I’m where I was after more than 12 hours on the NAS.

However, Duplicati recognized over 375,000 files when it ran on the NAS, while on the Windows computer it’s a little over 93,000 files. Both times the total size to back up was identical - 3.25TB - but I have no idea why this large discrepancy in file count exists. I double-checked, and in each case the backup source is identical.

Hmm, sounds CPU bottlenecked, yes. This might not be so bad if we can get LZ4 compression implemented for Duplicati.

I believe there is another thread on the forum about pre-seeding data from another machine: Possible to relink an existing backup store (for pre-seeding purposes)

That might be helpful. Otherwise, the options might be either just waiting for it to finish or tweaking --zip-compression-level to see if that helps performance at the cost of compression ratio. Note that the compression level has to be set before any data has been uploaded, because there isn’t a built-in way to change it later.
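
If you want a rough, standalone feel for that trade-off before touching the job, something like the sketch below can help. It uses Python’s zlib (DEFLATE, the same algorithm used inside Duplicati’s zip volumes) on a file of your choosing; SAMPLE is just a placeholder and the numbers will obviously vary by machine and file type:

```python
# Rough sketch of the speed vs. ratio trade-off behind --zip-compression-level.
import time
import zlib

SAMPLE = "sample.m2ts"            # placeholder: point this at one of your own files
data = open(SAMPLE, "rb").read()

for level in (0, 1, 9):           # 0 = store, 1 = fastest, 9 = the default level
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(out) / len(data):6.1%} of original size, "
          f"{len(data) / 1e6 / elapsed:7.1f} MB/s")
```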

Just to clarify, you CAN change --zip-compression-level at any time without problems; however, only compressed files created after the change will use the new level. Old files won’t be automatically re-compressed unless they are part of a compact process.

Oh, cool. I wasn’t sure so I didn’t wanna cause people to ruin their backup :slight_smile:

Good call being cautious! But as far as I know, the only settings that can’t be changed are --blocksize, --block-hash-algorithm, and the encryption passphrase. In all three cases my tests show that the CLI will throw an error and the GUI will show a message about not being able to change that setting.

Doesn’t the volume size cause problems as well?

Nope. You can change --dblock-size (aka “Upload volume size”) at any time without problems; however, just as with --zip-compression-level, the new value will only be applied to archives created going forward. Existing ones won’t be automatically repackaged at the new size unless they’re part of a compact process.

I believe there are some manual steps you can take to force a re-compress of everything, but I’ve never tried it myself. If you’re interested let me know and I’ll try to find the related posts.


Considering most of my files are almost incompressible video files, perhaps reducing the compression level is the way to go.

Any idea about the giant discrepancy in file count?

Compression level won’t matter for already-compressed files. There is a list of file extensions Duplicati checks, and if your file has one of those extensions it isn’t compressed at all.
You can see the list of extensions here: duplicati/default_compressed_extensions.txt at master · duplicati/duplicati · GitHub

I’m not sure what could be causing the discrepancy in file counts. Maybe it’s a bug in the file counter? The counter is a separate process from the actual backup, so if the problem is only in the counter it shouldn’t affect the backup result. But I don’t know how you’d confirm that until you’ve finished the backup and moved back to the NAS.
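
If it helps, here is a rough sketch for checking which of your source extensions that default list already covers. It assumes you’ve saved a local copy of default_compressed_extensions.txt (one extension per line, e.g. “.mp4”), and the source path below is just a placeholder:

```python
# Tally file extensions under a source tree and report whether each one is on
# Duplicati's default "skip compression" list (assumed saved locally).
import os
from collections import Counter

LIST_FILE = "default_compressed_extensions.txt"   # local copy of the file linked above
SOURCE = "/share/Multimedia"                       # placeholder source path

with open(LIST_FILE) as f:
    skip_compress = {line.strip().lower() for line in f if line.strip().startswith(".")}

counts = Counter()
for root, _dirs, files in os.walk(SOURCE):
    for name in files:
        counts[os.path.splitext(name)[1].lower()] += 1

for ext, n in counts.most_common():
    note = "skipped (stored uncompressed)" if ext in skip_compress else "will be compressed"
    print(f"{ext or '(no extension)':>16}  {n:7d} files  ->  {note}")
```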

I was just looking at that file actually.

I have mostly m2ts and ARW files, which are compressed but not found in that text file. I’ve edited the file to add those extensions so they’re excluded from compression as well. Hopefully that speeds things up a bit.

I’m assuming the encryption process is more CPU-bound than the compression process?

Both are mostly CPU bound. Compression seemed to have the biggest impact in tests done by @dgcom here: Duplicati 2 vs. Duplicacy 2.

The default compression level is 9, and going down to 1 cut about 36% off his backup time in one of the tests. Another test using level 0 cut it by about 45%, and turning off encryption cut roughly another 7% (for a total of about 52% with level 0 compression and no encryption).

Of course these things could also depend on memory and disk speeds, or even network speeds if you end up bottlenecked there.
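
If you want a rough feel for the relative CPU cost of the two steps on your own hardware, a standalone sketch like the one below can give an idea. It only exercises the underlying primitives (Python’s zlib for DEFLATE and the third-party cryptography package for AES); it does not reproduce Duplicati’s actual zip + AES Crypt pipeline, and the absolute numbers will differ from a real backup:

```python
# Rough comparison of DEFLATE (level 9) vs. AES throughput on one in-memory buffer.
import os
import time
import zlib

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

data = os.urandom(64 * 1024 * 1024)       # 64 MiB of incompressible test data

start = time.perf_counter()
zlib.compress(data, 9)
print(f"DEFLATE level 9: {64 / (time.perf_counter() - start):8.1f} MiB/s")

key, nonce = os.urandom(32), os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
start = time.perf_counter()
encryptor.update(data)
encryptor.finalize()
print(f"AES-256-CTR:     {64 / (time.perf_counter() - start):8.1f} MiB/s")
```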


I don’t know if that file is dynamically loaded or not - if adding those extensions works for you, please let us know so we can try and get them in the official release file.

I restarted Duplicati after making the changes - just to be sure.

I ran into the “restoring from a different OS” issue, so I restarted the backup, but the speeds seem significantly better. The Pi only has a 100Mbps port, but I’m averaging around 45Mbps - still going to take 6 days to finish at this rate, but that’s half the original estimate.

I’ll see if the transfer speed stays consistent over the next little while.

What settings did you end up with? Is it mostly compression that slowed you down?

I added the m2ts and ARW extensions to the compression ignore list and that seems to have done the trick. So yes, it looks like it was the compression.

This is what the incoming traffic on the backup destination looks like:

Reports a 45Mbps average over 5 minutes. You can clearly see where each 50MB chunk starts and finishes.
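
For what it’s worth, the numbers line up nicely with the default 50MB upload volumes (back-of-the-envelope, taking 1 TB as 10^12 bytes):

```python
# Quick sanity checks on the observed 45 Mbps average.
mbps = 45
bytes_per_sec = mbps * 1e6 / 8                                          # ≈ 5.6 MB/s
print(f"per 50 MB volume: {50e6 / bytes_per_sec:.1f} s")                # ≈ 8.9 s each
print(f"3.25 TB total:    {3.25e12 / bytes_per_sec / 86400:.1f} days")  # ≈ 6.7 days
print(f"port utilisation: {mbps / 100:.0%} of the 100 Mbps link")       # ≈ 45%
```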

Hmm, interesting. That’s some performance gain.

m2ts is likely compressed, but isn’t ARW raw, uncompressed image data? That would compress well

ARW is similar to Canon’s CR2. It’s supposed to be an uncompressed image format, but by most reports it’s actually compressed. I (naively) did a quick test by zipping a few ARW files, and the resulting archive was barely smaller than the originals.
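
For anyone who wants to repeat that quick check, a rough way to do it with Python’s zipfile (the file names are placeholders):

```python
# Zip a few sample files at maximum DEFLATE level and compare the total sizes.
import os
import zipfile

files = ["DSC00001.ARW", "DSC00002.ARW"]    # placeholders: use a few of your own files

with zipfile.ZipFile("test.zip", "w", zipfile.ZIP_DEFLATED, compresslevel=9) as zf:
    for path in files:
        zf.write(path)

original = sum(os.path.getsize(path) for path in files)
zipped = os.path.getsize("test.zip")
print(f"archive is {zipped / original:.1%} of the original size")  # near 100% => effectively incompressible
```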