Say I want to back up folderA, folderB, folderC and folderD (total ~325GB).
Say I want to back up to x@googleDrive (100GB), y@mega (200GB), z@mega (50GB).
There's no restriction on which folder (or part of a folder) gets backed up where; the software should decide that automatically. All folders should be backed up.
Duplicati cannot do multiple locations per job at the moment. As it stands, each destination requires its own job. In your case a job structure might look like this: job1 backs up folder A & folder B to x, job2 backs up folder C to y, and job3 backs up folder D to z. You'll need to monitor data growth to stay within your limits.
There is currently a feature request for Duplicati to support multiple destinations per backup job, i.e. the same data in multiple locations, but I think what you're asking for goes even beyond that. It sounds more like you want Duplicati to use multiple dissimilar destinations as a single destination. If that is the case, I wouldn't count on it happening any time soon.
@JimboJones is correct. I'm not even sure the feature request for multiple destinations per backup job can happen any time soon (it depends on volunteers, and they're few). Thanks BTW for helping in the forum. Every way the Duplicati community can volunteer their time and skill helps and is very much appreciated.
I did some web searches for an external product that could do that. I didn’t really find one that clearly could. Some could only distribute in a fixed admin-controlled fashion. Others didn’t have any API for the local side. Some high-price-looking ones didn’t say much on their sites, but invited visitors to contact them for details.
The RAID backend #479 is (I think) asking for something like what the OP is looking for. It discusses the challenges.
I’m not sure if @azuck would use additional software or devices, but if anybody knows any, please inform.
One of my early thoughts was ownCloud/Nextcloud, because they have a database, an external-storage option, and a programmatic API on the Duplicati side, but external storage didn't appear to be self-adjusting.
Unraid is self-adjusting across multiple drives, but I think it relies on filesystem-level support, and drives are not clouds.
If figuring out the max storage for each cloud is a problem, could it be set to a user-given value?
Additionally, in an initial stage/POC, redundancy in the cloud backup could be ignored.
I definitely don't know anything about the implementation, or how Duplicati works behind the scenes… just a wild guess:
Is it a partition problem?
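For what it's worth, the automatic assignment part really is close to a classic bin-packing/partition problem. Here's a minimal sketch (not anything Duplicati actually does) of a first-fit-decreasing greedy: place the biggest folders first into whichever destination has the most free space. The per-folder sizes below are made up for illustration; only the ~325GB total and the three quotas come from this thread.

```python
def assign_folders(folders, destinations):
    """Greedy first-fit-decreasing: place biggest folders first.

    folders: {name: size_gb}, destinations: {name: quota_gb}.
    Returns {destination: [folder, ...]}. This toy version cannot
    split a folder across destinations; it just raises instead.
    """
    plan = {dest: [] for dest in destinations}
    free = dict(destinations)  # remaining space per destination (GB)
    for name, size in sorted(folders.items(), key=lambda f: -f[1]):
        # pick the destination with the most free space remaining
        dest = max(free, key=free.get)
        if free[dest] < size:
            raise ValueError(f"{name} ({size}GB) fits nowhere whole; "
                             "it would have to be split across destinations")
        plan[dest].append(name)
        free[dest] -= size
    return plan

# Hypothetical per-folder sizes; quotas are the ones from the question.
folders = {"folderA": 60, "folderB": 30, "folderC": 180, "folderD": 40}
destinations = {"x@googleDrive": 100, "y@mega": 200, "z@mega": 50}
print(assign_folders(folders, destinations))
# {'x@googleDrive': ['folderA', 'folderB'], 'y@mega': ['folderC'], 'z@mega': ['folderD']}
```

With these made-up sizes it happens to reproduce the manual split @JimboJones suggested (A & B to x, C to y, D to z). The hard part the RAID backend #479 discussion gets into is everything this sketch skips: splitting folders, data growth over time, and what happens when a destination fills up mid-backup.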
Did somebody say that? Most cloud storage can give a free space value, and if so, Duplicati already has it.
You can look in a cloud backup's Complete log at FreeQuotaSpace in the BackendStatistics section.
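If you save that Complete log to a file, pulling the value out programmatically is straightforward. A rough sketch, assuming the log is JSON with BackendStatistics at the top level (adjust the lookup if your version nests it differently; the byte values below are examples, not from a real log):

```python
import json

# Stand-in for a saved Complete log; a real one has many more fields.
log_text = """
{
  "BackendStatistics": {
    "FreeQuotaSpace": 53687091200,
    "TotalQuotaSpace": 107374182400
  }
}
"""

stats = json.loads(log_text)["BackendStatistics"]
free_gb = stats["FreeQuotaSpace"] / 1024**3  # bytes -> GiB
print(f"Free quota space: {free_gb:.1f} GB")
# Free quota space: 50.0 GB
```

Something like that could feed user-given or reported quotas into whatever does the assignment.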
It seemed like there was some discussion on whether redundancy was a goal, or just aggregating space.
There’s not much point designing the fine details without developer input, and Duplicati needs more of them even to get the current Beta release to a Stable one. Now is not the time to tear code up for new features in my opinion. Someone could try working on it for future use, but there are many other feature requests too.