Crash resilience is not that important. There is exception handling all the way through, so it would have to crash in a particularly nasty way for this to matter.
The problem solved by multiple processes is interference control. There are a bunch of shared variables and settings in .NET (basically anything static) that are per-process. Most importantly, the SSL certificate validator can only be set for the entire process. That means if the user has a particular certificate hash allowed (or allows all certificates), this setting is applied to all HTTP requests, including OAuth and updater checks.
(This design exists because HTTP requests in .NET are pool-based, so you cannot easily map a request back to its source.)
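For illustration, this is roughly what that process-wide hook looks like; the allowed hash is a made-up placeholder for a user setting:

```csharp
using System.Net;
using System.Net.Security;

// Process-wide: once set, this callback applies to EVERY outgoing
// HTTPS request in the process, including OAuth and updater checks.
ServicePointManager.ServerCertificateValidationCallback =
    (sender, certificate, chain, sslPolicyErrors) =>
    {
        if (sslPolicyErrors == SslPolicyErrors.None)
            return true;

        // Hypothetical user setting: a single allowed certificate hash.
        const string allowedHash = "0123ABCD...";
        return certificate.GetCertHashString() == allowedHash;
    };
```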
Another is cancellation, where we want to force-stop a transfer. This is not currently possible for some backends, as they do not accept a cancellation token. It sort of works because we can Dispose the source stream, but it is still possible for the backend to hang. Due to worker pools (especially with Tasks) it is not possible to simply kill the thread, as there is no telling which thread is actually doing the work.
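To make the difference concrete, here is a hypothetical upload loop (the names are illustrative, not the actual backend interface). With a token, every await is a cooperative cancellation point; without one, disposing the source stream is the only lever, and it only works if the pending read throws instead of hanging:

```csharp
using System.IO;
using System.Threading;
using System.Threading.Tasks;

static class TransferSketch
{
    // Hypothetical upload loop; names are placeholders, not the real API.
    public static async Task UploadAsync(Stream source, Stream destination, CancellationToken token)
    {
        var buffer = new byte[81920];
        int read;
        // Each await observes the token and can abort cleanly. A backend
        // that never receives the token can only be stopped by disposing
        // `source` and hoping the pending read throws.
        while ((read = await source.ReadAsync(buffer, 0, buffer.Length, token)) > 0)
            await destination.WriteAsync(buffer, 0, read, token);
    }
}
```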
That would be the next step. If we can use the Controller class remotely, it is possible to run multiple backups side-by-side with no interference problems, and with super simple kill functionality.
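As a sketch of what that buys us (the worker executable and arguments here are placeholders, not an existing entry point):

```csharp
using System.Diagnostics;

// Sketch: one OS process per backup run. "BackupWorker.exe" and its
// arguments are made up for illustration.
var run = Process.Start(new ProcessStartInfo
{
    FileName = "BackupWorker.exe",
    Arguments = "--job job1.json",
    UseShellExecute = false,
});

// Killing a run becomes trivial and reliable: the OS reclaims all
// resources, no matter which thread or native call is stuck.
run.Kill();
```

Each process also gets its own static state, so the certificate validator problem above disappears for free.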
That sounds like the “RAID backend” idea, where you split the uploads over multiple providers. I have not pursued that idea, as the failure detection is complicated: you need to somehow know which destinations have which files, and then deal with cases where two destinations report the same file but with different metadata or content.
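Just to make the listing-reconciliation part concrete, a minimal sketch (the record type and its fields are made up for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;

// Made-up listing record for illustration.
record RemoteFile(string Destination, string Name, long Size, string Hash);

static class RaidReconcile
{
    // Group listings from all destinations by filename and flag the
    // names where destinations disagree on size or content hash.
    public static IEnumerable<string> FindConflicts(IEnumerable<RemoteFile> listings)
        => listings
            .GroupBy(f => f.Name)
            .Where(g => g.Select(f => (f.Size, f.Hash)).Distinct().Count() > 1)
            .Select(g => g.Key);
}
```

Detecting the conflicts is the easy half; deciding which copy to trust afterwards is the genuinely hard part.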