Specifying multiple drives as backup destination?

The developer documentation explains the simple storage model that lets Duplicati support so many storage types:

The backends encapsulate the actual communication with a remote host with a simple abstraction, namely that the backend can perform 4 operations: GET, PUT, LIST, DELETE. All operations performed by Duplicati relies on only these operations, enabling almost any storage to be implemented as a destination.

But the aim of minimal requirements works against your wish for very specialized behavior, so spanning multiple drives needs help from something outside Duplicati.

Backup to multiple clouds is a similar question that got some unlikely options, which you could confirm fail. Fortunately, local storage is a little easier, and UnionFS might be possible, although it likely adds risks, including exposure to drive failure and to setup or operation accidents. Different RAID levels add different protections.

Setting up mergerfs on JBODs (or a poor man's storage array) is (I think) a step removed from kernel support, since it uses FUSE. I'm not sure whether it can aggregate folders easily, or at all, but it may be “light”.
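For reference, pooling two drives with mergerfs looks something like this (the paths are made up, mergerfs must be installed, and you'd run it as root; `category.create=mfs` sends each new file to the branch with the most free space):

```shell
# one-shot mount: pool /mnt/disk1 and /mnt/disk2 at /mnt/pool
mergerfs -o category.create=mfs /mnt/disk1:/mnt/disk2 /mnt/pool
```

Duplicati would then just see /mnt/pool as one local folder, which is the appeal, but it also means one more layer that has to behave.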

On the heavy end of the scale is another system, or maybe a VM on this one. Not my area, but here's a post:

Explain to me like I’m 5: Why would I use TrueNAS Scale over Unraid?

(it rattles off some names that I'm not familiar with, but other forum users run some of them and “might” assist)

You might also be able to find or build a “better than nothing” file copier that fills one drive, then moves on to the next. Keeping up with changes would be the challenge.
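The fill-then-move idea is simple to sketch, and that simplicity is also why it struggles once files change or get deleted. A toy planner (names and structure are mine, purely illustrative):

```python
def plan_fill_then_move(files, drives):
    """Assign each (name, size) file to the first drive with enough free space.

    files:  list of (name, size_bytes) tuples
    drives: dict of drive_path -> free_bytes (consumed as files are placed)
    Returns dict of name -> drive_path; raises if a file fits nowhere.
    """
    plan = {}
    order = list(drives)  # fill drives in the given order
    for name, size in files:
        for d in order:
            if drives[d] >= size:
                drives[d] -= size  # reserve the space
                plan[name] = d
                break
        else:
            raise RuntimeError(f"no drive has {size} bytes free for {name}")
    return plan
```

Note what's missing: nothing here notices when a source file grows, shrinks, or disappears, so a second run can't simply repeat the plan. That's the "keeping up with changes" problem.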

If you do find a way to give Duplicati a single file or network storage destination, all I'll say is to beware of performance at that amount of data. Raising blocksize so there are no more than a few million blocks helps the SQL speed.

I tried a quick Google search of Reddit, where people worry about larger data, and even found a 30TB case:

Backing up a 30TB dataset on to multiple 8TB disks? (102 comments – maybe something there helps)

EDIT:

What is the correct syntax for an on-the-fly union remote? and Union and --backup-dir might help with storage, although whether it would work with Duplicati's Rclone backend (via the four operations quoted at the top) is unclear. Tapping into the interface rclone uses, with your own scripting, would allow some dangerous do-it-yourselfing.
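For what it's worth, a configured (not on-the-fly) union remote in rclone.conf is just a list of upstreams, and the paths here are made up:

```
[pool]
type = union
upstreams = /mnt/disk1 /mnt/disk2
```

That would then appear as a single remote named `pool:` that you could try pointing the Rclone backend at, but as said, whether Duplicati drives it correctly is unclear.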