Merging in concurrent_processing

After much talk (and much work), I am now ready to merge the concurrent_processing branch into master.

This is quite a big rewrite that refactors the entire backup process. I have been very careful not to lose any fixes from the master branch, but because of the refactoring I had to move the fixes and updates over manually, in multiple rounds, and this might cause trouble. Please report regressions so we can fix them asap.

The concurrent implementation splits the backup steps into “processes”, where each “process” runs independently (no shared variables), repeatedly reading input and writing output. This makes it easier to reason about each step, but slightly harder to see the big picture (how the steps are connected).
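
To make the shape concrete, here is a minimal sketch of one such process, written in Go for illustration only (the stage name and the fake hash are placeholders, not the project's actual code):

```go
package main

import "fmt"

// hashBlocks is one "process": it owns no shared state and simply
// transforms each message from its input channel onto its output channel.
// (The stage name and hashing are illustrative placeholders.)
func hashBlocks(in <-chan []byte, out chan<- string) {
	for block := range in {
		out <- fmt.Sprintf("hash-of-%d-bytes", len(block)) // stand-in for real hashing
	}
	close(out) // signal downstream that this stage is done
}

func main() {
	blocks := make(chan []byte)
	hashes := make(chan string)

	go hashBlocks(blocks, hashes)

	// Feed the stage and close its input, as an upstream process would.
	go func() {
		blocks <- []byte("example data")
		close(blocks)
	}()

	for h := range hashes {
		fmt.Println(h)
	}
}
```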

Once merged, we should no longer see the intermittent errors that popped up because an error was suppressed in one step and only surfaced much later.
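
One common way to get this fail-fast behavior in a pipeline, shown here with Go's errgroup package (an analogy for the idea, not the project's code): when any stage fails, the shared context is cancelled, sibling stages stop, and the error is reported at the end of the run instead of being swallowed.

```go
package main

import (
	"context"
	"errors"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	g, ctx := errgroup.WithContext(context.Background())

	// A failing stage cancels the shared context immediately.
	g.Go(func() error {
		return errors.New("upload failed") // surfaces right away, not later
	})

	// A sibling stage observes the cancellation instead of hanging.
	g.Go(func() error {
		<-ctx.Done()
		return ctx.Err()
	})

	// Wait returns the first error from any stage.
	if err := g.Wait(); err != nil {
		fmt.Println("backup aborted:", err)
	}
}
```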

The change also supports full parallelization: folder listing, block hashing, compression, and upload are now fully independent and use as many threads as the system has (the count can be limited via options).
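
As a rough sketch of what that fan-out looks like, here is a Go worker pool for one CPU-bound stage, sized to one worker per core (the worker count standing in for whatever the real option limits; the compress function is a placeholder):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// compress is a stand-in for one CPU-bound pipeline step.
func compress(block string) string {
	return "compressed(" + block + ")"
}

func main() {
	// Default to one worker per CPU; an option could lower this.
	workers := runtime.NumCPU()

	in := make(chan string)
	out := make(chan string)

	// Fan the stage out across the workers; they share only the channels.
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for block := range in {
				out <- compress(block)
			}
		}()
	}

	// Close the output once every worker has drained the input.
	go func() {
		wg.Wait()
		close(out)
	}()

	// Feed some blocks, as an upstream stage would.
	go func() {
		for i := 0; i < 4; i++ {
			in <- fmt.Sprintf("block-%d", i)
		}
		close(in)
	}()

	for c := range out {
		fmt.Println(c)
	}
}
```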

I’m super excited for this! :smiley:

Time to show GNU/Hurd how it’s done :wink:
