Backing up 2TB - 50MB remote volume size okay?

I have 2TB of media files (music and videos) that I’m backing up to a locally connected HDD. Is using a “remote volume size” of 50MB okay, or should I bump this up a lot?

Is it okay to leave the block size at the 100KB default?


I have 5 TB that I am backing up remotely and use a volume size of 200MB. I currently have over 40k files in my backup folder and a database file that is over 15GB. Since you are using a local HDD, I’d suggest using a larger volume size, say 500MB or even 1GB.
You can read about volume size and when to increase it here: Volume Size.
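
If it helps to see how the destination file count scales, here’s a rough back-of-envelope sketch: it just divides the backup size by the volume size and counts one dblock plus one dindex file per volume. Old versions, compression, and compaction shift the real numbers (mine are a bit lower than the estimate), so treat it as a ballpark only.

```python
TB, MB = 1000**4, 1000**2

def remote_file_estimate(backup_bytes, volume_bytes):
    # Duplicati writes roughly one dblock (data) plus one dindex (index)
    # file per remote volume, so the destination file count scales with
    # backup size divided by remote volume size.
    return int(backup_bytes / volume_bytes) * 2

print(remote_file_estimate(5 * TB, 200 * MB))  # 50000 -- roughly my setup
print(remote_file_estimate(2 * TB,  50 * MB))  # 80000 at the 50MB default
print(remote_file_estimate(2 * TB, 500 * MB))  # 8000 with 500MB volumes
```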

Thanks for the feedback. I had read the article but still wasn’t sure what would be best for me. In your case, did you change your block size from 100KB? I’m doing some backup tests right now and using 1MB.

I picked 200 MB block size from the default 50 MB size. At the time I started using Duplicati about 3 years ago, I only had 3 TB to back up, and my upload speed was 1/7 of what I have now. Looking back I wish I’d started with a 500 MB block, and am considering changing the block size to 500 MB or 1 GB as I type this.

Block size is 100KB by default. I think you’re thinking of the remote volume size, which is 50MB by default. So I assume you haven’t changed that. 🙂

You are correct, I am thinking about volume size, not block size. I left the block size at the default 100 kb.


Remote volume size probably doesn’t matter a whole lot, unless you have a back end that limits the number of files that can be stored.

The deduplication block size is a more important detail in my opinion. For such a large backup (2TB), I would increase it from the default 100KiB to probably 1MiB - 5MiB. This will reduce the number of blocks that have to be tracked, which helps performance and keeps the local database smaller. Deduplication can suffer a bit by increasing the block size, but in the case of media files it probably won’t matter much.

Unfortunately you cannot change the deduplication block size unless you’re willing to start over with a fresh backup.
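
To put rough numbers on “fewer blocks to track”, here is a quick sketch. It just divides the backup size by the blocksize, ignoring blocklist and metadata blocks, so take it as an order-of-magnitude estimate rather than an exact count:

```python
TB = 1000**4
KiB, MiB = 1024, 1024**2

# Rough count of dedup blocks the local database has to track for a 2TB backup:
# backup size divided by blocksize (ignores blocklist and metadata blocks).
for label, blocksize in [("100 KiB (default)", 100 * KiB),
                         ("1 MiB", MiB),
                         ("5 MiB", 5 * MiB)]:
    print(label, "->", f"~{2 * TB // blocksize:,} blocks")

# 100 KiB (default) -> ~19,531,250 blocks
# 1 MiB -> ~1,907,348 blocks
# 5 MiB -> ~381,469 blocks
```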

I have a rough rule of thumb saying the DB gets slow with more than a million blocks (because I’ve seen it thrash around with more), so for 2TB that gives a 2MiB blocksize, which happens to fall into the range given above…
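
Spelled out for this backup (the million-block figure is just my own observation, not a hard limit):

```python
MiB = 1024**2
backup_size = 2 * 1000**4   # ~2TB of media
max_blocks = 1_000_000      # where I start seeing the database struggle

print(backup_size / max_blocks / MiB)  # ~1.9, so round up to a 2MiB blocksize
```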

I consider this a dangling issue (there’s a GitHub issue open) because it’s easy to miss. Thanks for asking.

Thanks for the advice @drwtsn32 and @ts678 .

One follow-up question. I’m backing up my media now with a 50MB volume size and a 3MB block size.

Is it normal that the dindex files are all 2KB? I assume so, since it’s just backing up large videos right now, but I want to make sure that sounds fine before I wait hours for it to finish.

I would say yes, as it’s only indexing about 17 chunks per volume.

(my 200 MB volumes have index files that range from 300 KB to 600 KB with a 100 KB chunk size)
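
If you want to sanity-check the 2KB figure, here’s a back-of-envelope sketch. The per-block metadata cost is an assumption on my part (roughly a hundred bytes of hash-plus-size data per block); the real dindex files are compressed archives and can also carry blocklist data, so this is only a plausibility check, not the exact format:

```python
MB, KB = 1000**2, 1000
BYTES_PER_ENTRY = 100  # assumed rough hash+size metadata cost per block

def dindex_ballpark(volume_bytes, blocksize_bytes):
    blocks = volume_bytes / blocksize_bytes   # blocks stored in one dblock volume
    return blocks, blocks * BYTES_PER_ENTRY / KB

print(dindex_ballpark(50 * MB, 3 * MB))     # ~16.7 blocks, ~1.7 KB of entries
print(dindex_ballpark(200 * MB, 100 * KB))  # ~2000 blocks, ~200 KB of entries
```

The first line matches your setup (a dindex of a couple of KB sounds right), and the second is in the same ballpark as my 300-600 KB index files.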