Specifying multiple drives as backup destination?

What does container mean? If it means the UnionFS idea, your initial proposal was not to go that way.
Estimating sizes might get difficult, but for a rough first guess one could sum the sizes of the source trees.
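To illustrate the rough-guess approach, here is a minimal sketch that sums the apparent sizes of a few source trees. The paths are hypothetical placeholders, and the estimate ignores compression and deduplication, which would only shrink the real backup:

```python
import os

def tree_size(path):
    """Sum the apparent size of all regular files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # skip files that vanish or are unreadable
    return total

# Hypothetical source trees; substitute your own backup sources.
sources = ["/data/projects", "/data/media"]
estimate = sum(tree_size(s) for s in sources)
print(f"rough first-backup estimate: {estimate / 1024**4:.2f} TB")
```

Apparent file size is an upper bound for the first backup; sparse files and highly compressible data can bring the actual destination usage well below it.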
The downsides include figuring out what goes where and administering several smaller backups. Smaller backups do have advantages, though: if one of them breaks, it's probably faster to fix, and if it can't be fixed, less is lost.

One potential pain point of a large backup is that database recreation (after damage or loss in a disaster) works very hard to locate referenced blocks. Typically the dindex files say which dblock contains which blocks. Loss of a dindex can leave an unresolved reference, forcing a search through perhaps all of the dblocks.
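The cost of a lost dindex can be sketched with a toy model (this is not Duplicati's actual file format, just the lookup logic): with an index, a reference resolves without opening any dblock; without one, every dblock may have to be downloaded and scanned.

```python
# Toy model: each dindex maps block hashes to one dblock.
dblocks = {  # dblock name -> block hashes it contains
    "dblock-1": {"h1", "h2"},
    "dblock-2": {"h3", "h4"},
    "dblock-3": {"h5"},
}
dindexes = {  # pretend dblock-2's dindex was lost
    "dblock-1": {"h1", "h2"},
    "dblock-3": {"h5"},
}

def locate(block_hash):
    """Return (dblock, scanned): scanned counts dblocks opened."""
    for name, hashes in dindexes.items():  # cheap: small index files
        if block_hash in hashes:
            return name, 0
    scanned = 0
    for name, hashes in dblocks.items():   # expensive: full archives
        scanned += 1
        if block_hash in hashes:
            return name, scanned
    return None, scanned

print(locate("h1"))  # resolved via index, no dblocks opened
print(locate("h3"))  # index lost: found only by scanning dblocks
```

The worst case is a reference that exists in no dblock at all: the recreate must open every archive before it can give up, which is why a single missing dindex can make recovery on a huge backup painfully slow.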

Raising blocksize means fewer blocks to track, a smaller database, and faster SQL, but 30TB of source means roughly that much in dblock files, unless there's a lot of compressible or redundant data to reduce the size.
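A back-of-envelope calculation shows why blocksize matters at this scale: database size scales with the number of blocks tracked. The 100KB figure is Duplicati's default blocksize; the larger values are illustrative choices:

```python
TB = 1024**4
source = 30 * TB  # the 30TB backup discussed above

# Rough block counts (ignoring deduplication and the final
# short block per file).
for blocksize in (100 * 1024, 1024**2, 10 * 1024**2):
    blocks = source // blocksize
    print(f"blocksize {blocksize // 1024:>6} KB -> ~{blocks:>12,} blocks")
```

At the default blocksize, 30TB is over 300 million blocks, each needing database rows; at 10MB it drops to about 3 million, which is the kind of reduction that keeps the SQL queries and the recreate workable.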

For whatever it’s worth, some people are wondering how far they can push Kopia, a newer tool that at least seems to be under active development. One can judge its maturity level for oneself.

Maximum usable size of the repository? Petabyte scale possible?

EDIT:

I did a Google search on “TB” to see what the record was for Duplicati backup size. It’s unclear, but I found:

Duplicati for large archive? (asking about 14TB and growing, and getting back ideas on how to handle)