Extremely slow backup over LAN

Hmm, I’m submitting a PR (#3021) to add m2ts, but I’d need to see some testing of compression ratios for ARW before adding that.

ARW would need to compress fairly poorly before it makes sense to force that on all users.


I’d be happy to help with that if you can give me any guidelines. I have TONS of ARW files to test with :slight_smile:

If you think your uploads are happening so fast that they have to wait for the next archive to be prepped, then you could try different --asynchronous-upload-limit settings (default is 4) to control how many archive files “ahead” will be generated.
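
On the command line that could look roughly like this (the destination URL and source path are just placeholders for your own job; the last option is the one I mean):

```
# Let Duplicati prepare up to 8 volumes ahead of the uploads instead of the default 4.
duplicati-cli backup "ssh://pi.local/backups?auth-username=pi" /data/photos \
  --asynchronous-upload-limit=8
```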

Though it sounds like you might be on the other side of things, where your CPU is so busy generating the next archive volume that it can’t pay attention to any uploads that are already going.

Well, ideally we would want to test with SharpCompress directly since that’s what Duplicati uses.
But that more or less requires me to write a small C# wrapper that’ll make it easy to compress those files from the CLI and display the ratio - something like the sketch below.
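
Roughly like this, based on SharpCompress’s documented zip example (untested; the Deflate/zip choice is just a stand-in for whatever Duplicati’s zip module actually does):

```csharp
// Rough sketch of a SharpCompress ratio tester (untested).
// Usage: ratio-test <directory-with-ARW-files>
using System;
using System.IO;
using System.Linq;
using SharpCompress.Archives;
using SharpCompress.Archives.Zip;
using SharpCompress.Common;

class RatioTest
{
    static void Main(string[] args)
    {
        string dir = args[0];
        long inputBytes = Directory.EnumerateFiles(dir, "*", SearchOption.AllDirectories)
                                   .Sum(f => new FileInfo(f).Length);

        string zipPath = Path.Combine(Path.GetTempPath(), "ratio-test.zip");
        using (var archive = ZipArchive.Create())
        {
            // Add everything in the folder and write it out with Deflate compression.
            archive.AddAllFromDirectory(dir);
            archive.SaveTo(zipPath, CompressionType.Deflate);
        }

        long outputBytes = new FileInfo(zipPath).Length;
        Console.WriteLine($"{inputBytes:N0} -> {outputBytes:N0} bytes " +
                          $"({100.0 * outputBytes / inputBytes:F1}% of original)");
    }
}
```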

Alternatively, it can be tested by creating a couple of backup jobs with just a couple of ARW files in them. Duplicati will deduplicate the files, but if we have 3-4 jobs where the only difference is the compression level, we can see what kind of real-world results to expect at full/medium/no compression. This would also be a good way to see what kind of performance each compression level gives you :slight_smile:
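
Something like this (placeholder paths; --zip-compression-level goes from 0 = store to 9 = maximum, with 9 being the default):

```
# Same source, three destinations, only the zip compression level differs.
duplicati-cli backup "file:///mnt/test/level0" /data/arw-sample --zip-compression-level=0
duplicati-cli backup "file:///mnt/test/level5" /data/arw-sample --zip-compression-level=5
duplicati-cli backup "file:///mnt/test/level9" /data/arw-sample --zip-compression-level=9
```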

If you set your --blocksize to be larger than your largest target test file, then the only deduplication that would occur would be between exact duplicate files.
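
For example (placeholder paths again; the idea is just to pick a --blocksize above your biggest test file):

```
# With 50MB blocks, each test file fits in a single block,
# so only whole-file duplicates can deduplicate against each other.
duplicati-cli backup "file:///mnt/test/blocktest" /data/arw-sample --blocksize=50mb
```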

Of course a change like that could render any resulting time tests meaningless in the real world…

Well, my ARW files are all under 25MB each, which is smaller than the block size - I’m guessing dedup would always be running in that case. I could run the backup, delete the data, and redo it with a different compression level.

Also, I moved the temporary storage for Duplicati to a much faster drive and my transfer speeds went up significantly.

There are no more hard edges on each file transfer, and the average and minimum speeds are up a lot. The previous drive was a pretty old and fairly busy single HDD; the new one is an NFS share, but with an SSD cache.

Good catch on the drive performance bottleneck! :smiley:

Since you’re working on a Raspberry Pi I’m assuming this is not really useful to you, but I believe an option is being added to allow these temp files to be created fully in memory (an alternative is to use a ram drive).
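
Until that option lands, one way to approximate it is the RAM-drive route: mount a tmpfs and point Duplicati’s --tempdir option at it (mount point and size below are just examples; the mount needs root):

```
mkdir -p /mnt/duplicati-tmp
mount -t tmpfs -o size=2g tmpfs /mnt/duplicati-tmp
duplicati-cli backup "ssh://pi.local/backups?auth-username=pi" /data/photos \
  --tempdir=/mnt/duplicati-tmp
```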

It would be interesting if Duplicati could self-monitor how much time is spent waiting for a particular resource and offer suggestions on ways to improve backup performance. :thinking:

The backup jobs can’t share dedup, so it should be fine to keep them all during the test. But it’s always good to keep them in different folders so they don’t confuse each other :slight_smile:

Oooh, temporary files in memory sounds pretty great actually! In my setup Duplicati runs in an LXC container on a Proxmox host; the Pi is just the backup destination - really just a glorified SFTP server.

Right, that makes sense. I’ll do some tests later and update with results.

Self-diagnostics would be really cool. I think some cleverly placed log entries with timestamps before/after various steps could allow diagnostics to be done in the GUI or by an external monitor :slight_smile:
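
Just to sketch the shape I have in mind (not actual Duplicati code; the phase names are made up) - time each step and write a timestamped entry that a GUI or external monitor could parse:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class PhaseTimer
{
    // Run a named phase and log a timestamped duration for it.
    public static void Measure(string phase, Action work)
    {
        var sw = Stopwatch.StartNew();
        try { work(); }
        finally
        {
            sw.Stop();
            Console.WriteLine($"{DateTime.UtcNow:O} phase={phase} elapsed={sw.ElapsedMilliseconds} ms");
        }
    }

    static void Main()
    {
        // Stand-ins for real backup steps (compressing a volume, uploading it, ...).
        Measure("compress-volume", () => Thread.Sleep(200));
        Measure("upload-volume", () => Thread.Sleep(500));
    }
}
```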