Very slow to backup

My sentiments exactly
The backed-up computer and the NAS are connected by a normal wired Ethernet link with only a switch in between, and they are about 2 meters apart. I am using FTP, as described previously.

Don’t know how to help with that. I think you should do some speed measurements with something like FileZilla to see if you can pinpoint the performance problem.
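If a command-line check is easier than FileZilla, a raw upload over the same route shows what the network and the NAS FTP server can do on their own. A minimal sketch, where the host, credentials and share path are placeholders:

dd if=/dev/zero of=/tmp/ftp-test.bin bs=1M count=1024
curl -T /tmp/ftp-test.bin ftp://nas.local/Public/ --user myuser:mypass

curl prints the average upload speed when it finishes; if that number is high while Duplicati stays slow, the bottleneck is on the Duplicati side rather than in the network.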

FileZilla runs perfectly and is the reason why I chose the FTP route.

But what speed do you get on transfers through FileZilla?

Same problem here.
112,019 files (426.42 GB) to go at 5.70 KB/s.
Purely internal network.
A plain FTP put to the target machine runs at 100 MByte/s.
Is there any default throttling in Duplicati with FTP?
Thanks,
Mike

Currently I suspect it is the prep work Duplicati does on the machine itself. It is Duplicati 2.0.2.8 with QMono 4.2.1.0 running on a QNAP TS-559 Pro+.
All FTP tests using the same route showed >60 MByte/s.
Duplicati seems to run on only one core at a time (the load peaks on the cores barely overlap) and there is a lot of idle time…


I already set the options:
--zip-compression-level=0
--disable-streaming-transfers=false
--synchronous-upload=false
--thread-priority=abovenormal
--throttle-upload=100MB
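For reference, this is roughly what the same job looks like as a single command line, which can make it easier to test outside the web UI. A sketch only: the target URL, credentials and source path are placeholders, and duplicati-cli is the Linux wrapper name (on a QNAP it may be mono Duplicati.CommandLine.exe instead):

duplicati-cli backup ftp://nas.local/duplicati-backup /share/data \
  --auth-username=myuser --auth-password=mypass \
  --zip-compression-level=0 --disable-streaming-transfers=false \
  --synchronous-upload=false --thread-priority=abovenormal \
  --throttle-upload=100MB

Note that --throttle-upload only caps the upload speed; it does not reserve or guarantee bandwidth, so a 100MB cap should not be what slows the transfer down.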

Any further thoughts?

Thanks,
Mike

What version of Duplicati are you using? I believe there are some multi-thread options that are available (or may soon appear) in the Canary versions.
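For what it is worth, the multi-threading work that later landed in the Canary builds is exposed through advanced options. The names below are taken from my reading of the current documentation, so treat them as an assumption to verify against whichever build you are on:

--concurrency-max-threads=2
--concurrency-block-hashers=2
--concurrency-compressors=2

where the counts would normally be tuned to the number of cores you can spare.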

My scenario isn’t exactly the same, as my back end / target is a Windows 10 netbook running Bitvise WinSSHD and I’m using SFTP instead of straight FTP. I figured it was worth mentioning though, as I didn’t observe any slowness and was able to max out the target’s 100 Mbps network adapter.

My initial backup set (source was also a Win10 desktop) was about 2.5TB (not sure exactly, but about 300K files) and it would upload at about 11MB/s actual throughput. As you observed, the CPU and network usage was a bit “saw-toothed” due to the single-threaded nature of the process, but not enough to significantly impact overall throughput.

What are you using as your FTP destination?

I’m also running defaults for all the settings you listed. Do you see any significant difference one way or the other if you remove all the custom advanced settings?

@JonMikelV The version is listed above: 2.0.2.8_canary_2017-0-20
@sanderson The FTP target is a newer QNAP that takes FTP uploads at 100 MByte/s. I also tested from the shell on the other machine (where Duplicati runs) and it is definitely NOT the bottleneck.
The disks in the QNAP running Duplicati are a 4-disk RAID of WD Red 3TB drives, also not a bottleneck.
I think Duplicati is somewhat slow in preparing the data.
Could the Mono version have an impact?
I am not encrypting and I am not compressing, so it should really be a piece of cake to prep the data for FTP…

Mike

Thanks for the added detail.

I know the mono version can make a difference in some areas, like SSL certificates, but I don’t know if it affects speed.

Duplicati is currently (mostly) single-threaded. I am working on a version that is multi-threaded. My guess is that the CPU is too slow on one core, which is why it does not upload faster (it does not produce data fast enough to keep the upload busy).

I have not heard of slow-downs due to old Mono, but you could try the “aFTP” backend, as it implements the transfers a bit differently.
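For anyone trying that, switching backends is just a change of scheme in the destination URL; for example (host and path are placeholders):

ftp://nas.local/duplicati-backup  becomes  aftp://nas.local/duplicati-backup

with the same credentials and other options left as they are.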

What volume size are you using within Duplicati? For local (or local-network) backups I believe it is most efficient to increase the volume size by quite a bit compared to the default of 50MB - for my local (attached) HDD backup, for example, I had good success with 2GB volumes. I can’t prove that that’s an optimal size or anything, but my theory is that at 50MB, Duplicati spends a lot of time bouncing between uploading one volume and then prepping / uploading the next one, etc., while at larger volume sizes it spends more time in the “sprint” of uploading, hopefully utilizing the full upload bandwidth.

Just be aware that with default settings but 2G volume sizes you’ll need 8G of “temp” space (2G each x 4 ‘work ahead’ volumes) in which to hold the compressed files while waiting for uploads to finish. Oh - and of course when it comes time to verify files (assuming you haven’t disabled that) you’ll be transferring 2G chunks as well.

If space is an issue but you still want to use the larger volume sizes, I believe there are some parameters to help get around that (such as --asynchronous-upload-folder, --asynchronous-upload-limit, and --tempdir).
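As a rough sketch of how those could be combined for a fast local target (the sizes and paths here are examples only, not recommendations beyond what is discussed above):

--dblock-size=2GB                            (the volume size discussed above)
--asynchronous-upload-limit=2                (prepare fewer volumes ahead: 2 x 2GB = 4GB of temp)
--asynchronous-upload-folder=/volume1/tmp    (where prepared volumes wait to be uploaded)
--tempdir=/volume1/tmp                       (other temporary files)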

Good point, though I’ve previously overridden my Temp folder to use a special temp folder on my secondary (spinning) HDD, which at 3TB is only around half full. I would’ve assumed (though maybe wrongly) that 8GB of temp space wouldn’t be too much of a blocker for most use cases.

This is why I recommend this large volume size only for local / very fast backup destinations - for instance on B2, you start paying (pennies) for download bandwidth after 1GB per day, so a lower volume size is optimal there.

I’ve tried the latest canary version, duplicati-2.0.3.7_canary_2018-06-17.spk, on my Synology 218play:
the CPU usage is now around 6-7%, while before the update it was always around 1-2%.
To back up 850GB with no compression over LAN, it has taken more than 4 days to back up the first 400GB…
with thread-priority=abovenormal
zip-compression-level=0
chunk size = 100MB
I think the problem is still in the multi-threaded mode…
Are you still working on it?

Hi @adavide, welcome to the forum and thanks for using Duplicati!

Yes, the multi-threaded code is still being worked on, but is also still considered very “fresh” which is why it has so far only been released in the Canary builds.

You mentioned that “before the update” to 2.0.3.7 you were getting 1-2% CPU usage on your Synology 218play - do you recall what version you were using when you got those numbers?

Thanks @JonMikelV
Yes, it was version 2.0.3.3.
At the moment I am giving up, because for the second time Duplicati has filled the system partition. I set the tempdir option to use Volume1, but it still creates the sqlite database in the .config/Duplicati directory. I’ve tried twice with different jobs but didn’t succeed…
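For what it is worth, --tempdir only relocates temporary files; the per-job sqlite database location is a separate setting. A hedged sketch for the command-line client (paths and names are placeholders); in the web UI the equivalent, if I remember correctly, is the local database path on the job’s Database screen:

duplicati-cli backup <target-url> <source> --tempdir=/volume1/tmp --dbpath=/volume1/duplicati/job.sqlite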

I suspect you’re running into the bug in 2.0.3.6 & 2.0.3.7 (when all the multi-threading code was added) where temp files are not correctly cleaned up.

This has been fixed but not released yet (though I expect it quite soon):

That’s right! I also noticed a big increase in the space used on my Volume1, about the same size as the backup (500GB), and when I uninstalled Duplicati it gave me back the space.

But I think this is separate from the additional problem of the *.sqlite files being saved in the root folder of the NAS…

I will be happy to try again with the new release.

I am having this problem too. Whenever I use SFTP to any device it never transfers over 10 MB/s, and more often than not it is much slower than that. I even tested it against the SSH account on my local Ubuntu media box and I get the same slow results. BUMP!!!

(Above aside, this is the best OS backup program I have EVER seen. And I greatly appreciate the product and the work that has been put into it.)
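To separate Duplicati from the SSH path itself, a raw copy over the same connection gives a baseline; a quick sketch with placeholder user, host and paths:

dd if=/dev/zero of=/tmp/sftp-test.bin bs=1M count=1024
scp /tmp/sftp-test.bin user@mediabox:/tmp/

scp prints the transfer rate when it finishes. If that is close to line speed while Duplicati over sftp:// is not, the slowdown is more likely the single-threaded processing (or SSH cipher overhead) than the network.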