Is this normal performance, taking a day for less than 400GB to a local external SATA drive? I started slightly before noon, and 9 hours later it was only 50% done.
With the old Crashplan, the same dataset could be backed up from scratch in under two hours to the same drive after it had been formatted.
A lot of factors can affect performance, so it’s hard to say what’s “normal”. That being said, comparing Duplicati’s performance to CrashPlan’s is best done after tweaking some of Duplicati’s default settings to more closely match how we GUESS CrashPlan works.
Of course speed isn’t everything, so be sure to consider disk space used (local and destination) as well as other resources (Duplicati likely uses less memory).
And be sure you’re testing similar cases. For example, is CrashPlan sleeping (or better yet the service shut down) when Duplicati runs?
All that sounds like I’m making excuses for Duplicati, but I’m not. CrashPlan may easily be faster than Duplicati during the actual backup. For example, Duplicati scans the source folder when the backup starts, while CrashPlan seems to be constantly scanning whether a backup is running or not, so when one does start it has less to do compared to Duplicati.
Personally, on my machines the lower resources used by Duplicati when NOT backing up are worth a potentially slower backup.
Please note that depending on what Duplicati version you are running, you may not have the latest performance enhancements. On top of that, there are some more that haven’t made it into a release yet.
Perhaps somebody who has done some actual benchmarks with Duplicati can chime in…
First, it’s still running as I type this reply, with 30GB left to go.
At this point, I need to switch from Crashplan, so any help finding a replacement is welcome. It doesn’t come across as making excuses either.
I wanted to add some details:
Before I ran the backup process, I formatted my external HDD. With Crashplan, I also did this. No need to sync and compare, just dump and encrypt the files to the external HDD. I found it considerably faster this way.
Both Crashplan and Duplicati are set to run once a day. With my minimal data changes, it never takes any time for Crashplan, and I assume it has minimal impact on Duplicati.
Duplicati process shows CPU: 15-25%, Mem: 57 MB, Disk: 15-26MB/s
I currently plan to use Backblaze B2 and my current external HDD
I’m using the version: Duplicati - 2.0.2.1_beta_2017-08-01
Edit
I started from scratch using another app (Arq Backup)… a quick calculation puts it at about 12 hours to completion (less than 2 min/GB) for that one; CPU and disk usage are approximately the same.
Used default settings for everything in Duplicati and Arq Backup.
200GB in 9 hours is incredibly slow; that’s only about 53Mbps or 6.5MB/s. I can’t think of any reason why it’d be going that slow. What kind of CPU do you have?
The closest I had to test with was setting up a local backup to an internal 5900RPM SATA disk. 200GB (~18K files) completed an initial backup in a little under 3 hours with mostly default settings. While it was backing up it was maxing out a single core, and using between 100-150MB of RAM.
Of the three non-default things I’m doing, only the first one is likely to have much impact on performance (rough commands for all three are sketched after the list):
Tell Windows Defender (or whatever AV you have) to ignore the Duplicati.Server.exe service
Run as a service (using SYSTEM account)
Set Up VSS
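Here’s a rough sketch of those three steps from an elevated PowerShell prompt. The install path is an assumption (adjust for your machine), the service helper (Duplicati.WindowsService.exe) only ships with some builds, and the VSS part is just a per-job advanced option, so treat this as a starting point rather than gospel:

    # 1) Tell Windows Defender to skip the Duplicati server process
    Add-MpPreference -ExclusionProcess "C:\Program Files\Duplicati 2\Duplicati.Server.exe"

    # 2) Install Duplicati as a Windows service running under the SYSTEM account
    & "C:\Program Files\Duplicati 2\Duplicati.WindowsService.exe" install

    # 3) VSS: add the advanced option --snapshot-policy=required (or auto) to the backup job;
    #    snapshots need the elevated service/SYSTEM setup from step 2 to work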
I don’t think Duplicati is the fastest (very interested in performance after multi-threading gets added), but it is adequately quick and I haven’t experienced it being infuriatingly slow either. And I second @JonMikelV, I’d trade a good bit of speed for the refreshingly low memory utilization compared to the gigabytes of RAM CrashPlan would chew up.
I’m also curious why some people seem to be getting such slow speeds. I know there are lots of external factors that can reduce speed (good catch on AV) but it would be great if we could identify some of the more common ones.
My initial 200GiB backup to a NAS also took 9h, and the second run (with 6GiB of new data) took 22 minutes. There was strong disk activity in the temp folder and 50% CPU load on my i5-6200.
What volume size are you using, out of curiosity? I’ve talked about this elsewhere – for a local drive backup, I found that the default 50MB volume size is way too small, as Duplicati ends up spending most of its time preparing and compressing the data blocks, instead of spending its time utilizing the upload / local drive bandwidth.
For mine I settled on 2GB volume sizes (NOTE: again this is for LOCAL backups only, you should not do this for B2), because it ends up with quite a bit less clutter and less file processing overhead as far as I can tell. I’m not sure what official benchmarks would show though; I’ll have to try it soon. I’d also suggest trying out 500MB and/or 1GB volume sizes if 2GB sounds too big - even the smallest of these means 10x fewer dblock files than the default size.
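If you’re configuring from the command line rather than the GUI’s “Remote volume size” field, it’s the --dblock-size option. Something like this, where the destination and source paths are just placeholders:

    # Larger remote volumes for a LOCAL destination only; keep the 50MB default for B2 and other remote storage
    Duplicati.CommandLine.exe backup "file://E:\DuplicatiBackup" "F:\Media\" --dblock-size=2GB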
FWIW, I did a test with a bigger volume size (500MB) and was seeing higher writes to disk, sometimes peaking at around 100MB/s. However, it fluctuates quite a bit, coming back down below 20MB/s. Previously, I think it was pretty constant in the 20-something MB/s range.
After a short while, I nuked my external HDD, started VSS (it was set to manual startup), and installed Duplicati as a service. I seem to be getting the same performance as I had by simply increasing the volume size.
I tried --zip-compression-level=1 and did notice more frequent writes to my external HDD. However, we’re still talking 10+ hours, and it still had 100GB+ to go. I do want encryption enabled, though.
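For context, the job I’m testing amounts to roughly the following (I’m actually setting these through the GUI’s advanced options; paths and passphrase are placeholders):

    # low compression effort, 500MB remote volumes, AES encryption stays on because a passphrase is set
    Duplicati.CommandLine.exe backup "file://E:\Backup" "F:\Media\" --zip-compression-level=1 --dblock-size=500MB --passphrase=<my-passphrase>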
Unfortunately, I don’t have a spare drive. The other partition (i.e. F:) contains the folders I’m going to be backing up (movies, pictures), and I’m not sure it’s wise to put the temp files on that drive.
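If it comes to that, my understanding is that the --tempdir advanced option controls where Duplicati stages those temporary volume files, so pointing it at another drive would just be something like this (the path is hypothetical):

    # hypothetical: stage temporary volumes somewhere other than the source or system drive
    --tempdir="D:\DuplicatiTemp"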