This topic is where to follow up on a Releases-category topic that turned into a developer proposal.
To keep the topic title somewhat unique (there are similar ones), I’m calling this 2023 (I certainly hope).
Part of the goal is to get things somewhat restarted, prioritizing around risks, returns, and staffing.
On that note, there is always a need for developers, even if it’s just for limited specific assistance.
There is also a need for brave testers, as we are seemingly heading into pre-Canary-level testing.
There is no lab full of hardware configurations and professionals, so this has to be spread around.
More hands make lighter work, and this project is basically too large and complex for one person.
Huge thanks to @gpatel-fr for showing up with tools, a proposal, and an offer. We need others…
To start off on the maybe-boring technical details, how would people classify the dependabot PRs?
Although I think library authors usually try not to break old code on upgrades, sometimes things do disappear or get deprecated along the way. I’m just not sure how much manual review is appropriate.
Second obscure comment on release channels and history:
The release channel situation has gotten kind of odd. Canary used to be a don’t-use-for-production channel, but it was forced into wider usage as the Beta pace slowed. When the Canary pace slowed too, Canary users wound up with unofficial builds such as .dll drops (great for proving a fix before Canary).
Breakages happen from time to time. Quick respins often followed, then quietness returned for test.
After that, an Experimental to prove that infrequent updates actually work, then a wider Beta rollout.
Or so it has been, but can likely be changed if the team wants otherwise. It’s a lot of test levels, with somewhat defined community for each level, and possibly some people in the wrong level right now.
My first reaction to the proposal of the new way is here in the Releases topic, but can more be here?
Input from Duplicati people wanting to shape this or help is needed. This is still a community project.
Thanks again to all who made it, keep it alive, and maybe have long-term dreams (different topic…).
Please, anybody, volunteer in any capacity you can. It’s not just developers. It takes a whole team…
IMO these PRs are relatively safe because they are tested elsewhere and usually remedy compatibility or security problems. Not applying them means being stuck on old releases that are no longer maintained, and that’s a risk. Also, projects that don’t apply these PRs don’t look well-maintained, unless there is an explicit note from someone explaining why merging them would break things. Finally, if a merge does break something, it will hurt no one’s feelings to revert it, so inclusion in a canary would be my default option.
I have not received any invitation to be part of the project, but this does not concern me at the moment. I have taken a short break to deal with personal matters.
Additional details on my immediate plans: I intend to first complete the task of building the most important binaries; here is the breakdown by type:
So actually my priority will be to generate a Debian package (a surprise for me; I was expecting win64 to lead, and not the zip file at all. What is the zip used for, I wonder? Docker, maybe?). So .deb and .zip will come first; they should be the easiest as well. Also, having some kind of suffix in the name would be nice to identify the package.
Well, yes, of course. Actually getting something out of the gate would be nice, to get people to help.
It looks like current dependabot PRs are all the same because dependabot isn’t really configured.
This might tone it down to wanting Newtonsoft.Json 12.0.2 to move to 13.0.2 for security reasons. https://github.com/JamesNK/Newtonsoft.Json/releases isn’t flagging any 13.0.2 concerns to me…
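For reference, scoping dependabot down to security updates is possible in its configuration file. The sketch below is only illustrative (the ecosystem and file path follow dependabot conventions, but the values are assumptions, not the project’s actual settings):

```yaml
# .github/dependabot.yml -- hypothetical sketch, not Duplicati's real config
version: 2
updates:
  - package-ecosystem: "nuget"
    directory: "/"
    schedule:
      interval: "monthly"
    # Setting the limit to 0 disables version-update PRs entirely,
    # while security-update PRs are still raised when advisories appear.
    open-pull-requests-limit: 0
```

Raising the limit above 0 and adding `ignore` rules (for example, ignoring semver-major updates) would be a middle ground if non-security updates are wanted later.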
Version updates for non-security reasons are when the concerns from me and others kick in further.
Personally, I would prioritize catching up on PRs over doing a maybe-it-helps dependency refresh.
However, it will not accelerate PR handling if I don’t merge these PRs; whatever time I gain by not merging them, I will use on other things. I think I have made clear that I am not keen on merging features, only fixes for clear breakages.
However, again: the 4-year gap between versions 12 and 13 has been used by the Newtonsoft project for bug fixes. It’s not a good idea, IMO, to forgo these bug fixes just because they are not explicitly marked ‘security’. In 4 years, it simply doesn’t happen that the code has not been made more secure, unless the project is not used at all, and that’s hardly the case for this project:
And if you browse the download numbers (600K per day; it’s a trusted project if there ever was one):
Take a hard look at this page. I know that you are observant, so I’m sure you will not miss the interesting detail.
Enough about Newtonsoft from me; on to plans.
Now that I am a Duplicati developer, I have a small change of mind about the planned release.
In my opinion, the current canary has been tested more than enough; so long, indeed, that several problems have appeared because some things have become obsolete. The plan, as I see it, should be to refresh the canary to fix these problems, publish it, then after a short time (one month) go to beta (finally).
There is no need to add new risks to this (very near, I hope) beta by merging left and right, all the more so because I’m not ready to evaluate complex changes.
So the breakages to handle IMO:
there is a missing fix by Albertony in the code committed after the last canary (104)
It has been asked for, and since it’s a library change, here I am for the merge.
What I wanted initially was to add, alongside the bug fixes, a small improvement as a proof of concept that Duplicati could go forward (even slightly) by merging my experimental performance code.
But having seen the involvement of posters on this forum about the missing tray icon on macOS, I think it’s better to try to get that in instead, because, first, it’s currently broken, so the risk of regression is minimal (it can’t be worse than not working at all); and second, it’s a real improvement involving added code, so it serves just as well as a proof of concept.
So I plan to review it as an exceptional case, even though I don’t plan to do this in the coming months except in case of breakages, because I expect to do other things with Duplicati (*). I’m going to test whether it regresses on other platforms (at least Windows and Linux; done), and if a few posters confirm that everything is all right on Mac, I’ll commit it instead of my ‘performance’ PRs.
Also (the last reason, but not least): if I advocate teamwork, I should avoid committing my own changes. In well-managed projects, one doesn’t commit one’s own stuff; another contributor reviews the PRs and commits. Even when that’s somewhat a formality, it’s better than not caring at all.
Finally, for the immediate plans: please give me your final answer about Newtonsoft. I’ll create a synthetic PR regrouping all the planned changes (including Newtonsoft or not) and push it to my test repo, so that binaries will be available to test the next canary (105). As you’ll notice, the test repo now generates binaries for all 3 main platforms, although I can’t test the macOS binaries:
But I have already tested the generated installers for Linux and Windows, and they seem to work all right.
(*): getting better at understanding the Duplicati code, understanding the OAuth server, preparing for Kestrel… there are quite a lot of things to think about. Doing the minimum to keep the boat afloat is the only thing possible, unless there were half a dozen more developers.
I have no concerns on the Newtonsoft security fix and bonus improvements. Please merge it.
This actually might take us up to the Beta, but after that we can decide what to do with other libraries.
Or maybe your “update other libraries” means a general update, with or without dependabot?
While I’d love any fixes in a new System.Data.SQLite, our encryption got removed from it, so there are issues.
For PRs from specific authors, I would suggest working with them to check status. I see some
marked draft, and for others a lot of time has passed, so maybe the PR would merit an update.
The most up-to-date one is mega.nz, as you just had it tested. The uplink.NET PR wants 2.9.2710,
however that’s now about 24 versions (wow) out of date. Take advantage of contributor advice.
Authors will probably be thrilled that their aged PR is getting love, might have test thoughts, etc.
The extensions change looks safe, the macOS fix seems to be testing well, and seems worth it.
Generally anything which recovers something that’s totally broken seems a good candidate if it
carries little risk of making other things worse. macOS I think still works if one just browses to it.
As discussed, the fix is in a different area. We’re also still working on finding pre-release testers.
Testers can also be lined up to try the Canary when it comes out. This doesn’t need coding skill.
Maybe at some point another volunteer can help with solving logistics around fixing some issue.
They could also work with PR authors, but actually it’s not so hard, especially if they’re on the forum.
@albertony and @TopperDEL would you like to comment on PR commit plan here (or GitHub)?
I’m happy to assist in small areas. I use this tool enough that I want to give back where I can. I’ve got a couple of decades of stack development and network management, but I’m not terribly proficient in C#. So if you ever run into a situation that feels like it needs some mundane technical work, just shoot me a message.
The immediate need will be testing the (soon-to-come, I hope) pre-canary binaries. After that, it depends on what you mean by ‘stack development’. If it’s JavaScript, there is some in Duplicati, and everything needs attention: all software gets obsoleted as fast as possible, incompatibilities are creeping in everywhere, and users always want better features to boot.
@ts678 asked me to join the forum and discuss the release planning. From my reading so far, the step of successfully releasing another version after March has been taken. Since that release, no pull requests have been merged and I don’t see a path to a beta-type release. There has been discussion about a general plan to make a release and make sure it’s stable. But stability currently looks like it’s measured in a way that will keep the project stalled unless there are sufficient testers and verifiers. I’m not convinced the project is in a position to keep waiting for everything to be verified by 3 testers. I think that with the limited resources, things will stay slow and make the project look dead. I am forming a view that it’s active support-wise, but code-wise it’s dead, because the perceived risk of change and failure is too high to make code changes. There are 1-line config pull requests from 2 years ago that aren’t merged. If those can’t be reviewed and either accepted or denied, I am unable to imagine how progress will happen.
As I don’t have the background of @ts678 or @gpatel-fr, I can’t see what’s in your heads about moving from the current spot to a release. I would hope for some pull request and issue labelling to include them in, or exclude them from, the upcoming stable, so that focus can be placed on reviewing and merging what are considered the lowest-risk items. That will reduce the mental load when looking at GitHub’s issue tracker.
If a stable release is the goal, I don’t understand the addition of experimental items into the code. What that says to me is that either we aren’t serious about a stable release, or we aren’t really confident in what we have and in the code we are making. Which results in the recipe of “don’t touch anything”, and fear continues to stall the project.
Questions
What is the measure of “stable enough”, in a way that someone can objectively look at it and say “yes”?
Who will categorise issues as requiring action before a stable release (really a rework of question 1)?
Who can do commits?
What is the process to get a pull request merged (timing, number of reviewers, testing expectations)? And it has to be achievable, not in the “we need more people” basket: what can be done with what is available?
What is the timeline on the next release, and how is that chosen? (e.g., no new issues that meet the fix-for-stable bar, and release in 4 weeks if there is a change.)
I have lots of other questions, I just can’t articulate them in a helpful way as there is so much context missing.
Welcome to the forum @mr-russ, and I hope you’ll find this a suitable place to discuss developer work. There’s also a Staff category which used to be used more, but this seems like a reasonable spot to talk.
Thanks for writing up a great set of questions, most of which need better answers. Can you help there?
Although I’m a bit of a stand-in for @gpatel-fr who is interim maintainer, I can give some historical info, while adding that in my view a whole lot of process change is possible if someone has a good idea and can come up with workable ways that fit the various parties. Sometimes lightest is best, sometimes not.
This isn’t a huge organization, but I think its success and its future goals may call for some adjustments. Because this is still a volunteer effort, I hope that the actual volunteers can help some with the shaping.
Historically, Duplicati began as one developer, expanded some, contracted, and is now trying to rebuild, which won’t be instant, but does seem to offer a lot of flexibility. In that sense, good time to be talking…
This is an opinion piece, and I’m not seeking to step on toes. Any volunteer also has other things going. People have free times and busy times, and we should plan for flexibility as the resources come and go, which isn’t at all the same as not planning. I think getting a bit more specific and deliberate may help us.
Not quite success, so I’ve begun selected work with you while the new interim maintainer is fixing Macs and missing steps (real stairs… – I hope you’re OK). You can see some of the post-release fallout there, and it looks like another try at Canary will happen if we can get some more people with Macs to pre-test.
I’m not a GitHub expert, but I think I explained (maybe in some PR or issue post, which is a bad place) how having a single master branch (from which a Canary emerges and then hopefully lasts into a Beta) somewhat locks merges and releases together, leading to possible slow spots. With few people this matters less, since someone who’s busy fixing a Canary might want to focus on that. With more people it causes problems, so it might be useful to plan for a future flow that works better and fits what can be varying resources.
I think having a release serving 65 million backups per year cut from master makes master worse for riskier development, and I’m pretty sure that mega-changes such as leaving .NET Framework give an overwhelming push to change the build and release model. If you have a workable idea, please speak up.
There’s been much said before, some going nowhere. There are new people now. We do have to keep within what the original developer can release, as he has decided to keep that portion. Statement was:
Hi, thanks for the concern and raising the issue. As it stands now, I have very limited time and I am not able to fix issues and review PRs. I maintain the servers and other background stuff that needs to be running.
I can commit to doing the releases (building, signing, etc) if someone wants to take on the other roles.
and I basically read this as much of the project activity being delegated downwards, so let’s sort it out.
Lack of its original developer for some years has certainly added to lack of confidence. The answer is? Basically, how to proceed confidently given the situation? I don’t think we can, fully, but processes and working agreements may help, the who-does-what-when sort of talk that’s being hashed out here a bit.
Setting up GitHub to contain risks better would seem a wise move, but I’m not the one that can do that, and I’m not the one who would have to develop for that or the one who might move some fixes around.
There is one developer pursuing further goals such as getting off .NET Framework, adding Kestrel, etc. You’ve visited there today, I see. While I’d prefer the proposal-to-proceed to come from someone more familiar with GitHub, I’ve seen plenty of organizations have multiple branches, and sometimes messes.
From what I can see, the other varying number of core developers work on periodic releases, hopefully progressively more stable, with an eventual admittedly vague goal of finally calling Stable but imperfect. Maybe we should change the name. Nothing’s ever perfect, but IMO Duplicati still breaks a bit too often and is very laborious to repair. I tend to prioritize reliability of backups rather high, but there’s also more.
I don’t want to pull up all the discussions on whether to change the model, but here’s an example of that:
Upcoming beta has been used, and I think other trackers were used long ago, and now there’s this topic, which began at a lighter weight of Canary planning. Have to start somewhere, and it’s basically a re-start because the original author has had no time, and the person who was helping for awhile hasn’t in awhile.
If people like labels better than GitHub milestone, that’s probably possible to arrange. The exclude could probably be the DO NOT MERGE one for PRs, but I’m not sure I see a similar one for Issues to not pursue.
From what I hear, many projects have lots of old mystery issues that arguably could use some cleanup. Duplicati looks that way. I usually sort by last-updated to see what’s attracting any activity. If you waded deep into the issues, you might see that I went through them once, but just closing by idle time may be one approach. The code base is vastly different than it was. Anything still happening will get a new issue.
The labels that exist can reduce mental load a bit too. I tend to focus on backup corruption a lot, and grow annoyed when reproducible ones get ignored, but that’s a recent thing as resources have declined. Some of these backup corruptions need an incredible amount of effort to identify the reproducible steps, however without that, it’s too hard to figure out what to fix. I suppose working on those preparation steps is certainly one very useful thing to do if merges are going slow at particular times, for whatever reasons. That’s often something one person can work on, and there is no lack of issues seeking some attention.
I’m a little more reluctant to encourage future design talks while monitoring a release, but that’s possible.
What does this refer to? If the Mac fix had worked, there would typically be a break for some Canary testing by the frequently-upgrading risk-takers (this has changed as things slowed, though), then an Experimental serving as a Beta release candidate for slow upgraders, as a test that slow upgrades work, then a new Beta.
That’s the historical path, and the intent of this Canary was to become Beta if nothing terrible showed up. Sometimes it’s less clear which Canary should go. Sometimes there’s a plan in advance for the Beta, to lighten up on commits to master. Risky commits are avoided at end of release cycle, but it needs plans. The published plan for this Canary was the below:
This time, a Beta was so clearly long overdue that the timing question answered itself. Answer: ASAP.
I’m told by the person with the stat that there are few people on Experimental. Additionally, new practice of very unofficial right-from-the-developer release before Canary has begun, so a lot may be in flux now.
For a while, progress wasn’t happening, but it’s slowly coming back after some resource-limitation gaps. One issue the prior non-primary maintainer had, I think, was not wanting to commit PRs that weren’t understood. Not everyone knows every area. I wish we had full coverage, but I have a feeling we have gaps. I think we now have two C# and SQL devs (including you), but I’m not sure if there’s a GUI dev…
With the continued lack of time of original author, do you have proposals for how to do the best we can? Same applies to any issue. Volunteers are rare, and may have time and skill limits. What’s a good plan?
What does this refer to? I talked about pragmas in some other odd place. By name, it’s not experimental any more, but it’s not (yet) in the manual. I think this was the only survivor from pre-Canary experimental code, and the rest were specifically not added because of risk vs. reward. Allowing pragmas is aimed at performance not stability. People want everything. We both posted to a new performance issue recently.
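To make the pragma discussion concrete, here is a minimal illustration of the kind of SQLite pragmas typically used for performance tuning. This is a generic sketch (it does not reflect Duplicati’s actual option names or code), shown in Python’s sqlite3 for brevity:

```python
import sqlite3

# Generic illustration of performance-oriented SQLite pragmas;
# not Duplicati's actual code or option syntax.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# Negative cache_size means KiB rather than pages: ~64 MiB of page cache.
cur.execute("PRAGMA cache_size = -65536")

# Trading durability for speed; riskier pragmas like this are exactly why
# "someone breaks the code with the pragmas" is a real concern.
cur.execute("PRAGMA synchronous = OFF")

print(cur.execute("PRAGMA cache_size").fetchone()[0])
```

The cache pragma is generally harmless; the durability-related ones are where a safety boundary is worth having.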
It made the decision easier that the safer performance experiment of adding a cache worked the best; otherwise we’d be facing a hard decision about whether the complex SQL changes were safe enough to ship.
One thing that can definitely reduce stability is if someone breaks the code with the pragmas, but I hope @Sami_Lehtinen (who is very good at this) knows that there are limits and can help find safe speedups:
These are all good discussion points. Any “who” question can use a volunteer. You could be a new one.
We have now moved to more of a release-on-request model, after having agreed on who’s doing them. There’s still an availability question. Previous work by original author seemed to just show up, in bursts.
Other than that, some decisions have been what I’d call rough consensus (to steal a phrase from IETF). The extended phrase adds “and running code”. Actual follow-through sometimes works, sometimes not.
If you mean stable enough to ship some release, generally it’s from the above process, especially using people who are watching for problems on GitHub and the forum. If you specifically meant when to go to the Stable channel, see other comments, but that’s a harder question that historically has felt rather distant, however we’ve made much progress on stability. Anyone who thinks it’s bad now missed the early days.
I’ve seen organizations prioritize bugs based on guidelines, and have limits that any release must meet. This does tend to focus attention, knowing what’s needed to clear the bar. Sometimes help is called in… Agile projects might use a “Definition of Done (DoD)” concept that might also be useful. It’s collaborative. Because GitHub is a rather light bug tracker compared to specialists, that gets to a label/milestone need.
I don’t spend much time in PRs myself. Although I’m a developer, I’m new to many things Duplicati uses. My impression is that the person considering commit conducts PR reviews without specific rules on that. Duplicati has a few devs who are not super active but support their original code. There’s not any formal agreement that I know of, but being able to call on subject matters experts sometimes may help things…
I don’t know what testing was previously done on the pipeline to a Canary. I mentioned automated tests, which I believe happen even at PR submission time, but they’re generally not regarded as being able to catch all. Maintenance and continued development of these tests is another question. I don’t know them.
I’ve seen organizations do test different ways, generally with some sort of a QA department, but manual versus automated tests (and who wrote/ran them) varied. Duplicati has not found purely-test volunteers, beyond people willing to take a Canary and let us know whether it works, over some period of their uses.
What this means IMO is that before that, testing is left to developers either at their PR or on test release. This small group doesn’t have department boundaries, and working on what needs it seems reasonable.
As we’ve both been saying, I think, the question may be what can be done better with what’s available?
I hope there’s more now, so ask away, however the answer may be it needs someone to find an answer.
Thanks for the long reply. There are lots of good comments in there. I’m going to stick with one for the moment.
I can try to help in a few ways. I don’t know how to move the ball forward. I’ve submitted some pull requests, which you have made comments on. I’ve made comments on other pull requests. That is the extent of my ability to engage in code at the moment as committers/reviewers seem a very limited list.
What would I like to be able to do now:
Go through all pull requests and issues to label them as part of a beta review. More could be excluded from the beta if they are too risky, but the more safe ones that can go in, the better. Also close older issues and group related bugs into fewer issues (e.g., bad performance because of SQL spans many issues).
Somehow get relevant and ready items merged. (see what I need to succeed)
Be able to commit even small fixes like variable renames to improve readability. Names are very important to maintainability; just now I saw blocksize as a variable name when really it’s volumesize. Fixing things like that would allow maintainability improvements. I’d love to rename all the SQL tables as well.
Add the NUnit test adapter to the Test project so tests will work in dotnet and Visual Studio.
What would I like to plan:
Determine the real complexity of porting to .net6/netstandard20, and by that I mean working out which things are low risk. I think 48-50 of the 53 projects could be updated to netstandard with only csproj and administrative changes. The website, and possibly the services, are hard; but libraries and the command line are easier.
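For most library projects, the “only csproj changes” case would look roughly like the sketch below. This is a hypothetical project file, not one of Duplicati’s actual csproj files, and assumes the project already uses the SDK-style format:

```xml
<!-- Hypothetical library .csproj after conversion (illustrative only) -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- netstandard2.0 keeps the library consumable from both
         .NET Framework 4.6.1+ and .NET 6, easing a gradual port -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```

Projects still on the old non-SDK format would need the project file rewritten first, which is the “administrative change” part.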
Agree on a release plan and how to maintain both stable and development lines. It has been said that things show up as we approach stable; after release it’s the same. When does master take larger changes, and how do bugs get fixed in stable? Without a plan, a stable release will face the same development challenges. If my comment is agreed to describe an issue, then it’s about agreeing when to branch a stable 2.0 branch that is bugfix-only.
What am I missing to achieve those goals:
Someone who can and will commit changes.
Someone who can and will review, give feedback and approve pull requests.
Access to manage issues that allows tagging and closing to make them work.
It would be helpful to know who is around that can do those above tasks.
Who are the people who can agree to, and implement, anything that’s agreed.
The bad news on the missing items is that it’s probably gpatel-fr, who is not only somewhat occupied with the macOS fix change of design, but is also a bit limited by his accident, so we’ll hear from him less than usual.
Specifically, that’s possibly items 1, 2, and 5. I can do some of item 3, as I have the Triage role, but it can’t make labels; the Write role can. Some tasks, such as granting privileges, need Admin, I think, meaning the project owner, who probably also has some interest in how releases get done, being the person who does the releases.
I have little to say about the porting to a newer .NET, but we do now have actual experience to inform us. Along with that comes some history with specific directions, so any change will probably have an impact.
Readability improvements sound good to me. I think I saw you once suggest a clearer way to show SQL too, which is an area I have enough trouble with even with nice pretty-printed SQL. Run-together is hard.
Changing table names should keep in mind that new Duplicati has to handle and convert old databases, however I think there’s a fairly nice mechanism that does whatever updates are necessary to be current. The exception is the standalone restore tools; there are Python and Rust ones that would be affected.
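As a generic illustration of how a versioned-migration mechanism can absorb a table rename, here is a minimal sketch in Python’s sqlite3. This is not Duplicati’s actual migration code; the table names and version numbers are invented for the example:

```python
import sqlite3

# Hypothetical versioned migrations: each numbered step runs once, tracked
# via SQLite's user_version pragma, so a rename can ship as one more step.
MIGRATIONS = {
    1: "CREATE TABLE Block (ID INTEGER PRIMARY KEY, Size INTEGER)",
    2: "ALTER TABLE Block RENAME TO BlockVolume",  # the hypothetical rename
}

def upgrade(con: sqlite3.Connection) -> None:
    """Apply all migration steps newer than the database's recorded version."""
    current = con.execute("PRAGMA user_version").fetchone()[0]
    for version in sorted(v for v in MIGRATIONS if v > current):
        con.execute(MIGRATIONS[version])
        con.execute(f"PRAGMA user_version = {version}")

con = sqlite3.connect(":memory:")
upgrade(con)   # old databases get every step; current ones get none
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['BlockVolume']
```

The point is that the installed application heals itself, while external tools reading the database directly (the standalone restore scripts) would still need a matching update.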
Just trying to make some sense of all the issues and decide on a label scheme seems a pretty large job. You’ll certainly get a great sense of what’s breaking or getting complaints by reading through the issues, although some old issues have surely been fixed, or the author has lost all interest and won’t/can’t help. Referencing an issue from the forum puts a note in it, and I do that sometimes to push their activity date.
You may also find that, in addition to reliability, I have some aimed at making Duplicati more supportable. Heavy support loads can tie up volunteers who could be doing other things, but it is a good way to learn.
Do milestones appeal to you? We’ve used them as trackers before, but that doesn’t mean we must now. One option would be to use both – labels to help get a sense of the vastness, milestones after decisions.
Yes, this supports my belief that it’s difficult to contribute as it’s moved from one developer to another, and there isn’t really a team as yet. Having key roles unavailable for administration and support of newcomers stalls progression as interest gets lower. I’ve probably only got a couple of weeks left before it will be too hard and I’ll find something else to do.
I’ve done a bunch of different development models over time. I’ve worked on Moodle which has millions of installs and billions of users. I’ve worked on agile company environments with varying definitions of done. I’ve worked on gaming engines. Clarity of process has always been the most useful to engage with, as it’s easy to work out how to win at things.
There don’t appear to be goals beyond a broad “stable, easier to maintain”. I like milestones/epics that are scale-bound and deliver something with a reasonably small set of changes. That makes visible progress easier and allows agility when team capability changes. It also allows smaller changes to support it. .net6 is a perfect example: it’s a milestone, and each project conversion is an activity. We can be testing the code more widely with each project that is built to netstandard20, or even net6, before all are done. At the moment it’s a monolithic change that runs too long, and I believe those are harder to manage and achieve success with.
Having said that, I think there are still too many issues, and the outcome is probably to have some milestones that address groups of them. Examples:
Database performance is able to rebuild a 100GB backup in X time.
Restore of a single file out of 100 snapshots takes less than 1 minute.
Those are some examples with measurable success, and if we regress, that can be addressed. Milestones of “better” never finish. So, to create them well, they should be what are called SMART goals.
Overall clarity of how to do something and the ability to get it delivered are the first two key things. Those are what’s hard right now.
I am and will continue to do what I can to support you, but as the potential second dev member on the restart attempt, you’re starting at a bad time, with a health outage of the other dev, plus a bad release.
We will see how gpatel-fr progresses. Weeks should be enough to see improvement on both fronts.
If administration refers to the highest-level jobs, I think there have been roles or a hierarchy for a long time, and this seems reasonable on the surface, but it causes a single point of potential failure (truck factor) or waiting.
I would prefer to have a more specific talk about labels, if that is a big focus. Even though Triage role is able to change them, I’ve done no big change, and individual uses have been mostly on issues worked.
On closing issues, gpatel-fr has already asked, on a now-244-post topic begun in 2018 and then brought back from the dead by a similar new issue that was tacked on, that idle topics be closed to stop that…
I talked about issues there and also with you, and I expect your review will find many non-helpful ones. Unfortunately GitHub (or at least our setup) allows continued posts to closed issues, and people do so. The issue template does ask “I have searched open and closed issues for duplicates.” but can change. There might not be a pull request template, but I think sometimes those also help set the expectations. Working on defining those in a briefly explainable way would also help produce a useful longer version.
Typically here (and this is typical in open source as I understand it – correct me if I’m wrong), there is a progression of role based on things like time and merit, which makes it different than companies where there is extensive pre-vetting. I certainly wasn’t invited instantly, and I declined Write because it didn’t seem necessary, while adding risks of newbie me. My attitude was probably influenced by a co-worker who on day one at a previous company had deleted their source repo by accident. Don’t let that be me. Looking way above my level at the GitHub role summary, it looks like only Admin can make huge mess.
You have seen a lot, and the upside of the lack of process here is that you can play a large role in shaping it. After that, publish it. There is plenty of good advice on helping start people. Duplicati could do far more, subject to volunteers, as always. Some things are harder than others. A big design doc may be difficult; however, there’s also the point of view that the code exists and will stay current as things change.
I’ve posted references to crumbs of info that I know about when people ask. I can certainly do it again; however, it would be better if someone could make a presentable version and get it reviewed and posted, which requires volunteers with different skills. If I point to my guess, you’d certainly find out whether I guessed right or wrong after you look at the code awhile, sort of iterating until “OK”, and the next person is the next test.
I’m pretty open to trying some other ways. As you note, and I agree, we’re a bit loose, but not totally so. Focus is usually on non-performance bug fixes. This topic is an example: the goal was set, the goal was done, ignoring the fact that macOS didn’t quite work as expected despite some pre-Canary testing attempts…
Performance metrics would be very hardware dependent, but this points to the lack of Duplicati facilities that aren’t either personally owned or maybe rented from a cloud and possibly shared with some others. Duplicati isn’t rich enough to pay developers, but I’d like to see if money can be usefully used somehow.
Any shared project infrastructure raises the question of administration. Personal is easy but not very flexible. FWIW I’m sometimes using the PC I’m typing on for performance tests or expected hardships like killing Duplicati at random times, which can break a backup, and some issues are filed that I’d prefer to see fixed.
Having a dedicated system which at various times is measuring, challenging using expected challenges, or some other worthwhile task might be nice. I do have a PC to offer, semi-retired because it’s very slow.
Metrics also raise the question (which might be good) of whether small incremental changes can nudge their way to the ultimate goal (whatever that is…), or whether goals will push the bar higher until it’s impossible without the extensive design change one was hoping to avoid. But at least the devs will gain experience.
I guess that’s the “Specific”, “Measurable”, and “Achievable”. “Relevant” is up for debate but could likely be inferred from looking in the forum and issues to see where it hurts and how badly. Maybe also apply our own judgments based on things we see coming, e.g. .NET Framework works now, but not in the far future.
Your actual-performance examples remind me of a thought I had on how to create a metric for “Stable”. Bug counts can be worked down by combining bugs or tweaking priorities; however, there are also results seen from the user base in terms of support requests, maybe categorized. A rough metric isn’t very hard, and is kind of what’s done by polling people to see if Canary seems no worse than usual, so it goes Beta.
Although time may bring gpatel-fr back, what more can I do to help there, while attempting to fill in?
Resources aren’t something I can solve, although in the long run I’ve laid out some options to improve getting people started. I’m also trying to bootstrap that with a start from me, then you continue booting.
Although you want some better process for the people, and I tend to agree, you don’t strike me as one favoring mega-corp levels of process either, so it will take some guessing as to what the right level is…
Thanks for your interest and ideas. I’m sort of winging it here, but I think you could get some of them in. Hopefully someone else will shoot me a PM or something if I’m getting too far from what they would do.
Actually reading through the topic, it looks like goal setting wasn’t done in this topic. Probably in PMs, meaning forum PMs, not product managers or project managers, as large companies might formalize.
To support my claim that it’s not entirely plan-free, there’s the “Issues to address to get out of beta” mentioned above, and I see two staff discussions. This Canary might have been mostly dev-led, basically doing what clearly were some important and worthwhile changes. After that it gets less clear, and one may notice that GitHub doesn’t get much in the Assignee field. Should that get more formal? There are other tools to plan and track things, and some do a better job. I don’t want too many, though.
In trying to do some web research on the right level of process for a very small team, feedback is that experiments may be required. Teams differ, individuals differ, etc. Do any actually like hard deadlines? Unlike some big-company situations where things need to march in step, things are simpler here, and volunteers who favor progress may still prefer progress without pressure or anything excessively rigid.
Having more or less (a respin is needed) finished what one might view as a sprint, there’s a what-next question without a groomed backlog to pull from, and it’s also really hazy what the team velocity is. Even what areas the team has expertise in isn’t known, e.g. does anyone do AngularJS? If not, that limits us.
Having one dev plan for themselves is easy. With two, can we stick to that? Are any efforts multi-area? Unfortunately we can’t hire people to fill skills gaps, but maybe we can get volunteers for limited action.
Moving back to labels, I see now why the red backup corruption label came to my mind immediately.
These are typically regarded as high severity, and there’s a separate argument over severity versus priority, meaning work priority set by the planners, which is probably more in flux as things change. A severity is more stable.
There doesn’t appear to be a formal label plan for either severity or priority. There are several for areas, which might help understand which ones can be tackled with existing skills, or what learning is required.
So there’s my next increment of process chat. I think we could use more, but I’d prefer to keep it simple.
I’m really struggling to see how this goes well. I see no realistic way to get open pull requests merged unless @gpatel-fr does it all, and they are busy. I can’t see how long everything will be stalled while we work out what’s okay to merge and how we call something stable. Success, to newcomers and users, is measured in changes and releases happening. And only those with the power to make changes really have the power to make decisions; everyone else, you and me included, is still just offering opinions. The project overall still looks very stagnant even though there is a lot of discussion. I don’t think I have enough knowledge (non-code stuff) to make the determinations of what’s in 2.0 and what’s going in the next version. How can we make it so more than one person can commit changes?
The repeated calls to try to make Beta indicate to me that we really want a 2.0 branch that tracks what the required fixes are to release. I know there is pushback about that. But if we think forward to the release of 2.0, how is it going to be supported without a branch? If we keep 2.0 changes on master after release, there will never be a time to make changes that aren’t “safe”. I can’t see how we don’t end up with a stable branch (2.0) and a master at some point. Why not now?
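In Git terms, the stable-branch idea might look like the sketch below. The branch name, commit messages, and the throwaway repo are all hypothetical; this is just an illustration of master moving fast while only safe fixes are cherry-picked into the release branch:

```shell
# Sketch in a throwaway repo: a stable release branch alongside master,
# with one fix cherry-picked across while riskier work stays on master.
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email dev@example.com && git config user.name dev
echo v1 > app.txt && git add app.txt && git commit -qm "initial"
git branch release/2.0                          # stable branch: safe fixes only
git commit -qm "risky feature" --allow-empty    # master moves on
git commit -qm "fix: restore bug" --allow-empty
fix=$(git rev-parse HEAD)
git checkout -q release/2.0
git cherry-pick --allow-empty "$fix"            # bring only the fix across
git log --oneline                               # release/2.0 has initial + fix
```

The release branch then only ever accumulates fixes judged safe, which is exactly the “time to make changes that aren’t safe” separation argued for above.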
I would like it to be now, so all the experimental code, larger fixes, and parts of the .NET 6 migration can be merged, be used, and see continuous improvement. As an example, Improved restore performance by snamds · Pull Request #4785 · duplicati/duplicati · GitHub was submitted nearly 12 months ago and is probably the fix for a number of the slow restore reports. It hasn’t even been reviewed, let alone had a strategy for how it might make it into the codebase within the next year. The .NET 6 migration is another example, where even the developer feels like it’s a fork, and it has been going for 2 years. I do not believe that’s how you create a team environment, or get things done. Contributors need to feel like their effort is being recognised and action is being taken. There is currently a long history of the opposite. So anybody who comes along to this project, looks at whether they will do something, and thinks “if I fix something or write some code, it’s going nowhere” will invest their energy somewhere else.
Regardless of whether we are trying for Beta, adding S3 endpoints is something that is very, very low risk. I find it increasingly difficult to do further work on labels and categorising when simple things don’t move and I don’t see real action. It feels very much like a let’s-talk-about-it-and-nothing-happens scenario.
How can actual change happen, not just discussion about it? Is it wait until @gpatel-fr’s foot is better?
I’m going to amend that to say “successful releases” (and changes). Bad ones are bad.
Figuring out what’s okay to merge (you could help) seems important before doing merges.
The “how long” of it is hard to estimate. Basically, how do we develop assurance it’s OK?
Some time may be needed for people to develop some familiarity with the code, and that
could be a scheduled activity if you like it that way. Short of that, just toss it in? I hope not.
I don’t know what the forecast for that changing is, but I think there’s lots to do meanwhile.
Your label and issue plan wasn’t too clear, but that seems like it could take multiple weeks.
Want to pitch the next increment of important fixes to target? Please do. Someone should.
You could solve a backup corruption issue, and maybe that gets a good chance to go fast.
If it’s a really large complex change and you explain it well, that’s a chance to show off too.
You could demonstrate knowledge of the design and code by posting deep dives of issues.
In actual terms of fixed resources, you probably mean you. I’d say get prepared to do that.
Demonstrating the code-level expertise to commit (or not) code that’s proposed is needed.
There are some writeups, maybe some help around, but ultimately this is a learning curve.
Whether or not a PR is something we want now in terms of the product is more subjective.
In theory I guess we could publish a groomed backlog and invite people in, but it would be
stability-issue-fix-heavy, and some of those are pretty obvious from the labels or reading…
There was also talk above that maybe after Beta, Canary could go back to being a big risk,
however once bad code gets in, it can introduce subtle errors that are super difficult to find.
Given that unit tests don’t seem to spot severe known issues, I suspect they’re inadequate.
Then stay on 2.0 and think Canary and Beta releases. Big jump to new .NET is further away.
You don’t need to plot out the entire future of the project, although coordination might help…
I think I said that shipping .NET version (probably fairly raw for awhile) might force branching.
We use release branches, e.g. https://github.com/duplicati/duplicati/tree/release/2.0.6.3_beta
however they don’t tend to stay active long. Maybe it makes emergency fixes easier. Unsure.
One factor may be that there’s no specific well-thought-out proposal from people expert in Git.
I think someone who knows Git well and is willing to do some research on its general wisdom
should make a specific proposal, including which way the changes flow and who moves them.
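To illustrate the “which way the changes flow” half of such a proposal: one common convention (just an example of general Git wisdom, not something the team has adopted) is to commit an emergency fix on the release branch and merge it upward, so master can never accidentally miss a released fix. The branch name below borrows the existing release/2.0.6.3_beta style; everything else is hypothetical:

```shell
# Sketch in a throwaway repo: fix lands on the release branch first,
# then merges upward into master so the fix is never lost.
cd "$(mktemp -d)" && git init -q flow && cd flow
git config user.email dev@example.com && git config user.name dev
git commit -qm "initial" --allow-empty
main=$(git rev-parse --abbrev-ref HEAD)   # master or main, per Git config
git checkout -qb release/2.0.6.3_beta
git commit -qm "hotfix: crash on restore" --allow-empty
git checkout -q "$main"
git merge -q --no-ff -m "merge hotfix up from release branch" release/2.0.6.3_beta
```

The alternative direction (fix on master, cherry-pick down) also works; the point of a written proposal would be to pick one and say who does the moving.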
Right now my guess is that people’s private repos occasionally sync from the duplicati master.
I found an official https://github.com/duplicati/duplicati/tree/netstandard that’s not in use. Why?
@mnaiman @tsuckow do you know the branch history? Any thoughts on how things will play out?
I don’t think I’ve heard a proposal of how to use GitHub in the best way to get to the new .NET.
There’s also a release issue. I had thought we’d run both release trains awhile, but am unsure.