Increase index zip file size on remote

I want to recreate the database on a different server, and it is stuck here fetching about 80k small files (about 200 KB each) from the remote.

Can the index zip file size be changed somewhere, so that instead of fetching 80k small files you only need to fetch maybe 10 files from the remote? It would make fetching much faster than it currently is.

Welcome to the forum @internet

What is the end intention? You should never have two active backups writing into the same destination.
If you are changing hardware, you can move the old database. For disaster recovery, you can use this:
Restoring files if your Duplicati installation is lost

An index file does not exist by itself. It indexes a dblock file, as How the backup process works explains.
Specifically, its size depends on how many blocks are in its dblock file. This can be increased by having larger dblock files (controlled by Remote volume size on the Options screen), but please first look at:
Choosing sizes in Duplicati and keep reading below. It’s not clear that fetching time is limiting the speed.
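
As a rough sketch of that relationship (the sizes below are assumptions for illustration, not measurements): one dindex per dblock, and roughly one entry per block in that dblock.

```python
# Rough sketch: one dindex per dblock, each referencing about
# (Remote volume size / blocksize) blocks.  All numbers are assumptions.

def estimate_index_files(backup_bytes, volume_bytes, block_bytes):
    """Estimate dindex file count and blocks referenced per dindex."""
    dblock_count = backup_bytes // volume_bytes      # one dindex per dblock
    blocks_per_dblock = volume_bytes // block_bytes  # entries in each dindex
    return dblock_count, blocks_per_dblock

KB, MB, GB = 1024, 1024 ** 2, 1024 ** 3

# Assumed example: 4 TB backup, default 50 MB volumes, default 100 KB blocksize
files, blocks_each = estimate_index_files(4 * 1024 * GB, 50 * MB, 100 * KB)
print(files, "dindex files, each referencing roughly", blocks_each, "blocks")
# -> roughly 80,000 dindex files with ~500 block entries each
```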

If you are at the default 50 MB Remote volume size, 80k dblock files implies roughly a 4 TB backup, so your blocksize might need raising.
It should probably be around 4 MB instead of the default 100 KB. This would result in fewer blocks to track; unfortunately, blocksize can’t be changed on an existing backup, so a fresh backup would need to be started.
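
To put rough numbers on that (a sketch assuming a 4 TB backup, as above):

```python
# Block-count arithmetic, assuming a 4 TB backup (sizes from the discussion)
TB, MB, KB = 1024 ** 4, 1024 ** 2, 1024
backup = 4 * TB

blocks_default = backup // (100 * KB)  # default 100 KB blocksize
blocks_bigger = backup // (4 * MB)     # suggested ~4 MB blocksize

print(f"{blocks_default:,} blocks at 100 KB vs {blocks_bigger:,} blocks at 4 MB")
# -> roughly 43 million blocks vs about 1 million blocks to track in the database
```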

Is the destination a very slow fetcher? How do you know processing the files isn’t slower than fetching them? Processing a dindex file means recording all of its blocks in the database. A larger file wouldn’t change that.

If you want to watch the download and processing a bit, you can watch About → Show log → Live → Verbose.
There’s heavier logging possible, and logging to a file, if you really want to see where the time is being spent.

Although I’m not sure that increasing Remote volume size to get larger and fewer index files will raise speed, doing that does not immediately change anything. The change happens at compact time, which isn’t normally particularly intensive, but a big raise in Remote volume size could download and upload almost everything, because existing remote volumes will suddenly look underfilled. You do have some control over how it runs.
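
A hypothetical illustration of the scale involved (the amount already uploaded here is an assumption, not your actual number):

```python
# Why a big Remote volume size jump can trigger a large compact: the existing
# volumes look underfilled against the new target, so they may all be
# downloaded, repacked, and re-uploaded.  Numbers here are assumptions.
MB, GB = 1024 ** 2, 1024 ** 3

old_volume = 50 * MB
new_volume = 300 * MB
already_uploaded = 500 * GB            # assumed amount written at the old size

old_count = already_uploaded // old_volume
new_count = already_uploaded // new_volume

print(f"Repacking ~{old_count:,} old volumes into ~{new_count:,} new ones")
print(f"could mean downloading and re-uploading ~{already_uploaded // GB} GB.")
```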

What is the end intention? You should never have two active backups writing into the same destination.

To restore files on different hardware.
I cannot restore files without having a database present. (It seems?)

An index file does not exist by itself. It indexes a dblock file, as How the backup process works explains.
Specifically, its size depends on how many blocks are in its dblock file.

I’ve increased the volume size a while back. It was the default 50 MB; I changed it to 300 MB.

This would result in fewer blocks to track; unfortunately, blocksize can’t be changed on an existing backup, so a fresh backup would need to be started.

Not an option. It took 10 years already to upload it. Not even fully. :wink:

Is the destination a very slow fetcher? How do you know processing the files isn’t slower than fetching them? Processing a dindex file means recording all of its blocks in the database. A larger file wouldn’t change that.

Seems logical to me. Fetching 10,000 files from a remote seems slower to me than fetching a single 1 GB file.

(We are talking Google Drive here)
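
As a back-of-the-envelope guess (latency and throughput here are assumptions, not measurements):

```python
# Per-request overhead vs. raw transfer: why many small fetches feel slow.
per_request_overhead = 0.5      # assumed seconds of latency/setup per request
bandwidth = 10 * 1024 ** 2      # assumed 10 MB/s throughput
payload = 1 * 1024 ** 3         # ~1 GB of index data either way

many_small = 10_000 * per_request_overhead + payload / bandwidth
one_big = 1 * per_request_overhead + payload / bandwidth

print(f"10,000 small fetches: ~{many_small / 60:.0f} min, "
      f"one big fetch: ~{one_big / 60:.0f} min")
# With these assumptions, the per-request overhead alone adds well over an hour.
```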

Using “Direct restore from backup files” will create a partial temporary database tailored to that restore.

I hope that means it happened in the past, and you survived the heavy compacting. If it's a recent change, beware of that.
You are perhaps saved from compacting (for now) because you keep all versions, but you want a change:

Remove remote versions after switching to smart retention

and I don’t want your compact to run very long, like in this case:

How long to compact, and why no logging? Is it stuck?

Yes, but (making up stats) if 10% of time is fetching, and 90% is processing blocks, fetch matters little.
You would have to look at your own logs to get some sense of the timing, but it might slow down as the DB gets big.
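
To make the made-up split concrete (a sketch, not a measurement of your setup):

```python
# If fetching is only 10% of recreate time and block processing is 90%,
# even a large fetch speedup barely moves the total (made-up numbers).
total = 100.0                               # arbitrary time units
fetch, process = 0.10 * total, 0.90 * total

fetch_speedup = 10                          # assume fewer/larger files fetch 10x faster
new_total = fetch / fetch_speedup + process

print(f"Old total: {total:.0f}, new total: {new_total:.0f} "
      f"-> only ~{100 * (1 - new_total / total):.0f}% faster overall")
# -> only ~9% faster overall
```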

Where is that option?
Selecting restore yields “fetching item list”, which then fails because no database is present.

I hope that means it happened in the past, and you survived the heavy compacting. If it's a recent change, beware of that.
You are perhaps saved from compacting (for now) because you keep all versions, but you want a change:

I didn’t run compact yet.
I changed the volume size after a good few hundred gigabytes into the backup, though.

Restore is on the left side, I suppose because it’s generic and not tied to a specific backup (you supply the info).

Interestingly, you can also get to the existing backups (but not for a disaster recovery) farther down the list.
Restore from configuration would be nicer than manual entry, but it is buggy. Save an export anyway.