The problem is that the ‘commit volunteer’ is really a maintainer (you don’t want a clown who commits anything that comes in and walks away when problems happen, I guess?).
As I understand it, you have been searching for this for quite some time and so far no one has stepped up; worse, other experienced developers are missing too.
The only conclusion is that @kenkendk is irreplaceable.
It’s all right really; there are only 2 ways forward: first, drop Duplicati; second, find a way to make the project move forward without an irreplaceable manager. In my opinion, the only realistic way is to do the opposite of what has been done up to now. Until he half retired, the project was the epitome of a bus factor of 1: one person managed most of the project, doing and supervising everything. The problem is that as soon as this person has some stress to manage outside the project, burnout threatens.
The solution is to replace a single person with a team, where stressful problems are divided and shared.
The difficulty is managing the transition.
Here is how I suggest doing that.
It’s necessary to establish priorities.
- the project must not die by abandonment. So it’s necessary to get out a canary (because the old canary is outdated), then a beta.
- the project must start a process of un-kenkendkisation.
- when step 2) is well advanced, and only then, it will be possible to move forward again.
So the canary to come must be a minimal refresh of the current one, without any unnecessary risk, because that would be a distraction from task 2), the next priority.
My suggestion is to commit the library updates (Newtonsoft, MegaNZ, Storj) and the zero-risk changes. AFAIK, there are 3 zero-risk PRs in the pipeline: one change to the file filters (a no-code change), and my 2 experimental PRs (they are gated behind an environment variable, so if people want to take the risk of trying them, they will be aware of the risks).
Now, this is not very inspiring; how can we advance? More slowly than has been the case, but some progress could be possible even while lacking a full-blown experienced Duplicati developer. First, I could do a quick pass to rule out obviously bad stuff. I don’t remember having seen any in the existing PRs, but that could be because of my poor understanding, of course. Second, and more importantly, tests by volunteers could be decisive. If, say, 3 people other than the PR submitter have tested a change and reported that it doesn’t eat the cat, and even makes Duplicati work better, the PR could be adjudicated in.
Now, to do the testing, one needs a binary unless one is a developer. Obviously, building binaries on demand does not scale; it would go straight back to a bus factor of one.
That’s what I have tried to remedy these past few days.
Here is the project (that’s a proof of concept):
You’ll notice the presence of an ‘artifact’ link (it’s clickable). It allows you to download a zip with Windows installers.
The ‘testdup’ repo is a copy of the current Duplicati trunk, where I have replaced the automated tests with a test+build process. For any PR entered into the system, binaries are generated (in this test repo, they are configured to last 5 days).
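For reference, the 5-day lifetime is an option of the artifact upload step in a Github Actions workflow. A minimal sketch (the step name, artifact name, and output path here are my own illustration, not the actual testdup layout):

```yaml
# Sketch of a workflow step that publishes the PR's Windows binaries
# as a downloadable artifact; retention-days limits how long it is kept.
- name: Upload Windows installers
  uses: actions/upload-artifact@v4
  with:
    name: windows-installers    # hypothetical artifact name
    path: build/windows/        # hypothetical output directory
    retention-days: 5           # matches the 5-day lifetime in the test repo
```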
At the moment, only Windows binaries are built; I have not yet gone further.
It should be relatively easy to build Linux (at least Debian) binaries; I just have not yet had the time to do it. As for Macs, there is a theoretical possibility. The first problem for me is that I have no clue about Macs, and even if I managed to output Mac binaries, I could not test them. It’s not in my plans to buy a Mac and learn this stuff. The second problem is that, AFAIK, running untrusted binaries on Apple OSes is difficult, and the necessary secrets will not be in the Github repo. Not an impossibility, but difficulties for me. I will do what I can in the following days, but no warranty of quick success either.
Ultimately, I don’t know if the misery that is Github Actions will ever be able to manage every build for Duplicati, but in the short term it doesn’t matter; only the big three platforms are really needed to enable volunteers to help evaluate the PRs. If volunteers don’t help with PR testing even with binaries freely available, it would unfortunately spell the end of Duplicati, unless some genius steps up to do the impossible.
Going back to the second priority, the way I have built the Github action for the Windows binaries matters a lot to me. It’s a script called by the Github action, and crucially it can be run locally. That’s the opposite of the current system, where the build script is published but can’t really be run by anyone but the project author (it’s tied to his computer’s configuration, and can’t be run on a computer that is not a Mac). Every person wanting to build and test Duplicati should be able to do it on their local computer (Windows, Mac, Linux) exactly as it’s done on the host server (Github). I’ll pass on Docker, since it’s not a dev environment by definition. This should be a major goal to maximize the chances of getting more help. That’s where most of my effort would go if I take the committing task, much more than into reviewing PRs. Also, the experimental PRs that I have done are part of that: it’s necessary to rid the code of some very hard-to-read parts if new help is to be found.
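To make concrete what I mean by ‘a script called by the Github action’: the workflow step stays a thin wrapper, and all the real build logic lives in a script that anyone can run identically on a local checkout. A sketch, with a hypothetical script name (not the actual testdup file):

```yaml
# The workflow only delegates; a volunteer can run the same script
# on their own machine and get the same binaries.
- name: Build Windows binaries
  shell: bash
  run: ./build-windows.sh   # hypothetical script name; runnable locally as-is
```

The design point is that the workflow file carries no build knowledge of its own, so Github is just one more machine running the script rather than the only place the build can happen.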
Finally, about the PR advocated by @dgileadi: I agree that tests are very important. However, they are less important than priority two: turning the project into a team effort. And the failing tests are the Appveyor ones, which are, AFAIK, only used for code coverage. The Github tests cover the same unit testing. IMO, a greater priority would be to make the Github tests work reliably (sometimes the Mac tests fail, sometimes the UI tests fail).
The change by tz-il is very interesting; I have even spent 2 hours in an Appveyor VM trying to make sense of it, but ultimately I would not take the risk of committing a change that has no clear rationale and doesn’t fix a pressing breakage.
If enough volunteers test it, or if some clear explanation is found, that would be another story.
It was a bit looong winded, but that’s it. This is my proposal. If you agree, you ask @kenkendk to grant me commit rights. Once that’s done, I’ll commit the necessary PRs to refresh the current canary without taking risks.
Basically, you are the release manager: since it’s not possible yet to release without him, you ask him to generate the canary when you think it’s time, and then the next beta. But I’m not going to commit every PR that you want, because I think enhancing the product is not a priority right now; it’s already very good as it is. Only breakages matter more than moving forward with the new approach.