About this Remote volume size thing... Specific use case question

Hi all. New Duplicati user here. Awesome software. Many thanks for that :slight_smile:

I’ve been reading about this ‘Remote volume size’ setting: that you can increase or decrease it, and that doing so may have a positive (or negative) impact on the speed of the backup and restore process. But to me it feels like a trial-and-error thing per use case. There don’t seem to be specific numbers to abide by, other than the default of 50MB.

From the manual:
If you have large datasets and a stable connection, it is usually desirable to use a larger volume size. Using volume sizes larger than 1GB usually gives slowdowns as the files are slower to process, but there is no limit applied.
So. Larger than 50MB. But maybe smaller than 1GB…

So now I’m wondering if someone has a backup situation similar to mine, and if that someone has maybe figured out the “ultimate setting” for ‘Remote volume size’, for optimal speed in the backup and restore process.

I have a folder here with 1.5TB of files. The average file size is about 600MB.
It all needs to be backed up to an off-site SFTP server.
My download and upload speeds are both 200 Mbit/s.
The download and upload speeds of the off-site location are 120/12 Mbit/s.
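
For a rough sense of scale, here’s my own back-of-the-envelope sketch (assuming the transfer saturates the slower end of the link, and ignoring compression, deduplication and protocol overhead) of how long that initial upload would take:

```python
# Initial backup of 1.5 TB, limited by the slower end of the link:
# my 200 Mbit/s upload vs. the off-site location's 120 Mbit/s download.
dataset_bytes = 1.5e12            # 1.5 TB of source data
bottleneck_mbit = min(200, 120)   # effective link speed in Mbit/s

seconds = dataset_bytes * 8 / (bottleneck_mbit * 1e6)
print(f"Initial backup: ~{seconds / 3600:.1f} hours at {bottleneck_mbit} Mbit/s")
# -> roughly 28 hours, mostly independent of the chosen volume size
```

If that arithmetic holds, the raw transfer time is dictated by the connection, and the volume size mainly changes how the data is split up on the remote end.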

It’d be a shame if I were to use settings that make my entire 1.5TB process maybe take twice as long as needed. So what would you recommend in this regard?

Thanks!

Welcome to the forum! Just noticed your thread has no replies yet.

I don’t think there’s an “ultimate setting” - there are simply too many variables. Personally I leave my volume size set to 50MiB, but I’m using a back end that has no file count limit. (I think that is the biggest driver to increasing it.)

Increasing the size may speed up backups, but it will also slow down restores. Where to balance this is totally up to you.

Thanks for your reply!!

So. The file count limit. The drive Duplicati is writing to is an ext4 drive.
I believe this means I would have to keep the number of files lower than 2^32 - 1 (4,294,967,295).
At the moment I’m at 170,817. So in that regard I’m now at approximately 0.004%.
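
Putting a rough number on the remote side (just a sketch, assuming roughly one dindex file per dblock file and ignoring compression and deduplication, so the real count will differ):

```python
# Rough estimate of the remote file count for a 1.5 TB backup
# at the default 50 MB remote volume size.
dataset_bytes = 1.5e12
volume_bytes = 50e6

dblock_files = dataset_bytes / volume_bytes   # ~30,000 data volumes
total_files = dblock_files * 2                # plus ~1 dindex file each
print(f"~{total_files:,.0f} remote files")    # ~60,000

ext4_limit = 2**32 - 1                        # ext4 file count ceiling
print(f"{total_files / ext4_limit:.4%} of the ext4 limit")
```

So even at the default 50MB, it looks like the file count stays nowhere near the limit.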

About this though…

I don’t understand why it will slow down restores.

//edited to add: Found in the FAQ why increasing the remote volume size would slow down restores.

Anyway, if I understand correctly, the upload and download speeds I was talking about are actually irrelevant when trying to determine the ideal ‘Remote volume size’.

Maybe I don’t understand what you mean but I don’t think upload/download speed is irrelevant. If you use 1GB volumes and want to restore a 1MB file, Duplicati will have to read at least one 1GB volume file. So restores can definitely be slower with larger volumes.
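
To put rough numbers on that for your situation (a sketch, assuming a restore is limited by the off-site location’s 12 Mbit upload speed and that the wanted file sits in a single volume):

```python
# Time to fetch just the one volume that contains a small file,
# when restoring over the off-site location's 12 Mbit/s upload link.
uplink_mbit = 12   # bottleneck when pulling data back from the SFTP server

for volume_bytes in (50e6, 1e9):   # default 50 MB vs. a 1 GB volume
    seconds = volume_bytes * 8 / (uplink_mbit * 1e6)
    print(f"{volume_bytes / 1e6:>6.0f} MB volume: ~{seconds / 60:.1f} min")
# 50 MB -> ~0.6 min; 1 GB -> ~11 min, just to restore a single 1 MB file
```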

Are you saying that because your backups are to a local disk? I suppose it would be less of an issue in that case.

As far as file count limits go, you definitely won’t have an issue with an ext4 volume. I’m thinking more of OneDrive back ends that are limited to 5,000 files.

Got it now on upload/download speed as well as remote volume size.

(And no, it’s not local. It’s an external ext4 SFTP location.)

Thanks!