I started using Duplicati. I have 1/1 Gb/s internet. My computer runs Ubuntu 22.04 (i5-8400, 64 GB RAM, M.2 NVMe disk). My transfer rate is around 14 MB/s. I tested this on a single 600 GB file using the program's default settings with encryption. I saw that there are many advanced options; can any of them speed up my uploads? The file was uploaded to OneDrive.
OneDrive itself is part of the problem. Feel free to search the Internet for thoughts about OneDrive speed.
There may be ways to get more performance, but deliberately ignoring OneDrive's attempts to slow you down can potentially get you banned. Even without such extra-effort tricks, Duplicati has a bug where it ignores their per-connection slow-down requests. If the per-connection upload rate is the limiting factor, you can add more connections with the asynchronous-concurrent-upload-limit option. You can check this in About → System info during a backup, where you may see BackendFileProgress lagging behind BackendFileSize. I had to watch my own backup for a while before I caught an example of that.
Also, how did you measure speed? Unless this is an initial backup, Duplicati only backs up changes, and finding them takes a while. Uploading doesn't even begin until enough changed data has been gathered (the default remote volume size is 50 MB), and the speed on the GUI status line sags between uploads because it's an average.
The size settings in Duplicati are the spots to tune, if producing remote volumes is the bottleneck here. The default 100 KB blocksize is good for deduplication but bad for speed. If speed is the larger goal, blocksize can be increased hugely; a rough guideline for normal use is about 100 KB of blocksize per 100 GB of total backup. Increasing the remote volume size might help upload speed a little, but it will slow down restores.
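Not an exact science, but here is a minimal sketch of that rule of thumb in Python, assuming you scale blocksize roughly linearly with source size. The power-of-two rounding is my own choice, not a Duplicati rule:

```python
# A minimal sketch of the rule of thumb above: roughly 100 KB of
# blocksize per 100 GB of source data (i.e. about 1 KB per GB).
# The power-of-two rounding is my own choice, not a Duplicati rule.

def suggest_blocksize_kb(source_bytes: int) -> int:
    GB = 1024 ** 3
    raw_kb = max(100, source_bytes // GB)  # never go below the 100 KB default
    size_kb = 128
    while size_kb < raw_kb:                # round up to a power of two
        size_kb *= 2
    return size_kb

# Example: the 600 GB file from this thread suggests about a 1 MB blocksize.
print(suggest_blocksize_kb(600 * 1024 ** 3))  # -> 1024 (KB), i.e. 1 MB
```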
So OneDrive has some bandwidth limitations? As I understand it, there isn't much that can be done, and it's really only suitable for transferring small files. For me it looks like this:
"BackendFileSize": 97593755180, "BackendFileProgress": 97604240940
I also tested other solutions, Duplicacy and rclone, but the results were similar. Thank you for your answer.
Your two counters are only about 10 MB apart there, which is possibly normal during uploading.
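For reference, that roughly 10 MB figure works out directly from the two counters you posted:

```python
# The ~10 MB gap, computed from the counters quoted above.
backend_file_size = 97_593_755_180
backend_file_progress = 97_604_240_940

gap_mb = abs(backend_file_progress - backend_file_size) / (1024 * 1024)
print(f"{gap_mb:.1f} MB")  # -> 10.0 MB
```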
I'm not sure exactly how that progress number is calculated, though. There are other, more awkward ways to see whether the bottleneck is creating the volumes to upload or uploading them. There's a queue controlled by asynchronous-upload-limit (although there are some reports that it doesn't actually work):
When performing asynchronous uploads, Duplicati will create volumes that can be uploaded. To prevent Duplicati from generating too many volumes, this option limits the number of pending uploads. Set to zero to disable the limit
The queue is probably in /tmp. Some of what's there is volumes still being built for encryption and upload. I think they all begin with dup- and end up at the (default) 50 MB remote volume size, so it's hard to tell which ones are upload-ready because the sizes are similar.
Regardless, if you see a really large backlog of files there, upload is probably falling behind volume creation. If upload is keeping up, then try speeding up volume creation, for example with the bigger sizes mentioned above. A quick way to check the backlog is sketched below.
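Here is a minimal sketch of that check, assuming the default temp directory and the dup- prefix mentioned above (adjust the path if you have moved the temp folder, e.g. with --tempdir):

```python
# Count Duplicati's dup-* temp files in /tmp and their total size.
# Assumes the default temp location and file prefix discussed above.
import glob
import os

files = [f for f in glob.glob("/tmp/dup-*") if os.path.isfile(f)]
total_mb = sum(os.path.getsize(f) for f in files) / (1024 * 1024)

print(f"{len(files)} dup-* files, {total_mb:.0f} MB total")
# A backlog that keeps growing suggests uploads are the bottleneck;
# if it stays at just a few ~50 MB volumes, volume creation is.
```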
That's probably pretty good proof. Some programs let you run more concurrent uploads, but OneDrive may be aware of such tricks and counteract them; they don't publish details.
If you are looking for better speed, how about distributed storage services based on free software instead of a centralized, closed-source one? Some options are available natively in Duplicati, such as Sia Decentralized Cloud and Storj DCS.