Increase size of index zip files on remote

What is the end intention? As a side note, you should never have two active backups writing into the same destination.

To restore files on different hardware.
It seems I cannot restore files without having a database present?

An index file does not exist by itself. It indexes a dblock file, as the beginning of "How the backup process works" explains.
Specifically, its size depends on how many blocks are in its dblock file.
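For intuition, here is a purely conceptual sketch of that relationship. The names, offsets, and structure below are made up for illustration and are not Duplicati's actual dindex file format:

```python
# Conceptual picture only, NOT Duplicati's real dindex format: a dindex
# volume lists, for one dblock volume, the blocks stored inside it, so a
# restore can locate blocks without downloading and opening dblock files.
dindex = {
    "duplicati-b1234.dblock.zip": [   # hypothetical dblock volume name
        {"hash": "qZk+NkcG...", "offset": 0,      "size": 102400},
        {"hash": "47DEQpj8...", "offset": 102400, "size": 102400},
        # ... one entry per block, so a dblock holding more blocks
        # produces a proportionally larger dindex
    ],
}
print(len(dindex["duplicati-b1234.dblock.zip"]), "blocks indexed")
```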

I’ve already increased the volume size over time. It was the default 50 MB, and I’ve changed it to 300 MB.

Increasing the blocksize would result in fewer blocks to track; unfortunately, blocksize can't be changed on an existing backup, so a fresh backup would need to be started.
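To make "fewer blocks to track" concrete, here is some rough arithmetic under assumed figures. The 2 TB backup size and the two blocksizes are examples for illustration, not anyone's actual numbers:

```python
# Rough arithmetic: the number of blocks the local database must track
# depends on the blocksize, not on the remote volume (dblock) size.
# The 2 TB total and both blocksizes below are illustrative assumptions.
TOTAL_BACKUP_BYTES = 2 * 1024**4    # assume a 2 TB source

for blocksize in (100 * 1024, 1024 * 1024):   # e.g. 100 KB vs 1 MB
    blocks = TOTAL_BACKUP_BYTES // blocksize
    print(f"blocksize {blocksize // 1024:>4} KB -> ~{blocks:,} blocks")
```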

Not an option. It has already taken 10 years to upload it, and it's not even fully done. :wink:

Is the destination very slow at fetching? How do you know that processing the file isn't slower than fetching it? Processing a dindex file means recording all of its blocks in the database, and a larger file wouldn't change that.
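As a hypothetical sketch of that point (simplified, not Duplicati's actual code or database schema), the processing cost tracks the total block count rather than the number of dindex files:

```python
import sqlite3

# Simplified sketch, NOT Duplicati's actual code or schema: processing a
# dindex means inserting one database row per block it lists, so the cost
# is driven by the total block count, not by how many dindex files exist.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE block (hash TEXT, size INTEGER, volume TEXT)")

def process_dindex(dblock_name, entries):
    """Record every (hash, size) entry from one dindex file."""
    conn.executemany(
        "INSERT INTO block (hash, size, volume) VALUES (?, ?, ?)",
        ((h, s, dblock_name) for h, s in entries),
    )
    conn.commit()

# A 300 MB dblock with 100 KB blocks lists ~3,000 blocks: the same ~3,000
# inserts happen whether they arrive in one big dindex or many small ones.
process_dindex("duplicati-b1.dblock.zip",
               ((f"hash{i}", 102_400) for i in range(3_000)))
print(conn.execute("SELECT count(*) FROM block").fetchone()[0], "rows")
```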

Seems logical to me. Fetching 10,000 files from a remote seems slower to me than fetching a single 1 GB file.

(We are talking about Google Drive here.)
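For what it's worth, here is a toy model of that intuition. The per-request overhead and bandwidth figures are assumptions for illustration, not measured Google Drive numbers:

```python
# Toy model: per-request overhead vs raw transfer time. The 0.5 s
# round-trip cost and 10 MiB/s bandwidth are assumed figures, not
# measurements of Google Drive.
PER_REQUEST_OVERHEAD_S = 0.5
BANDWIDTH_BPS = 10 * 1024**2          # 10 MiB/s

def fetch_time(num_files, total_bytes):
    """Estimated wall time to fetch total_bytes split across num_files."""
    return num_files * PER_REQUEST_OVERHEAD_S + total_bytes / BANDWIDTH_BPS

one_gib = 1024**3
print(f"1 x 1 GiB file:     {fetch_time(1, one_gib):8.1f} s")
print(f"10,000 small files: {fetch_time(10_000, one_gib):8.1f} s")
```

Under these assumptions the per-request cost dominates by a wide margin; whether it does in practice depends on the remote's actual latency and rate limits.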