Upload pauses every ~10 seconds on Synology NAS


I just installed Duplicati on my Synology DS213j. I chose the version that is closest to the stable Beta (v2.0.3.3) while still supporting OneDrive v2 (MS Graph), which is my back end. My Internet upload speed is 10 Mbps. The backup works, but the upload proceeds in bursts of ~10 seconds followed by ~10 seconds of idle time, so the average upload rate is only about 50% of my available upload bandwidth.

I’m using the default block and dblock sizes. At first I thought the pauses might mark the boundaries between individual 50 MB blocks being uploaded, but that can’t be right: at 10 Mbps, each block would take 40 seconds (50 / (10 / 8)), not 10. Any idea why this is happening? I’d like to smooth out the upload rate and max out my upload bandwidth if possible.
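For anyone following along, the timing arithmetic above can be checked in a couple of lines (this just restates the numbers from the post, assuming the default 50 MB remote volume size):

```python
# Expected upload time for one 50 MB dblock volume on a 10 Mbps uplink.
volume_mb = 50            # default dblock (remote volume) size, in megabytes
uplink_mbps = 10          # upload bandwidth, in megabits per second

uplink_mb_per_s = uplink_mbps / 8        # convert megabits/s to megabytes/s
seconds_per_volume = volume_mb / uplink_mb_per_s

print(seconds_per_volume)  # 40.0 -- not the ~10 s bursts observed
```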


Hi @cinergi, welcome to the forum!

Have you monitored any other resources, such as CPU or disk I/O? It’s possible the upload process requires CPU cycles that are also needed for the zip file creation.

It’s less likely, but your destination might be the limiting factor: it could be busy processing / saving what’s been uploaded and asking the sender to pause.

If you see CPU spikes, consider doing a local backup and seeing if they go away.

Thanks! Yes, I’ve been monitoring the CPU usage and it does spike up to 100%. However, my understanding is that Duplicati compresses, encrypts & sends a 50 MB (by default) block, then does the same for the next one. If so, I would expect no compression or encryption to be going on while a block is being uploaded (is this true?). In that case, the upload should proceed uninterrupted for each 50 MB block, which at 10 Mbps would take ~40 seconds, not the 10-second bursts I’m seeing.

Even if compression & encryption are going on in the background, wouldn’t the load on the CPU be relatively smooth? Instead I’m seeing a very clear cycle: 10 seconds of upload, then 10 seconds of idle.

I tried Synology Cloud Sync on the same NAS (not at the same time as Duplicati). It’s a plain file-synchronization application with no versioning, and it isn’t block-based like Duplicati. Using the same OneDrive back end, Cloud Sync fully saturates my 10 Mbps uplink, which rules out OneDrive and my Internet connection as possible causes. Of course, since Cloud Sync doesn’t do versioning, isn’t block-based, and doesn’t calculate a hash for each block, its CPU requirements are probably lower than Duplicati’s.


I think there’s more concurrency than you describe, however I can’t tell you exactly how much. I’m pretty sure the dblock zip is stream-created block by block. That is a necessity for –compression-extension-file to be able to turn off compression for individual source file blocks within a dblock file (which may contain data from many sources). Shortly under that option you can see some other concurrency controls (some of which might not have shown up until which added more concurrency). Also see –asynchronous-upload-limit and –synchronous-upload… You could also look in /tmp (probably) to see staging of outbound files, observe About --> Show log --> Live, etc. This will probably take some experimentation just to understand, then whether it can be changed is the question.