Issues to address to get out of beta

Sounds good to use the GitHub features. Is there a current milestone set for coming out of beta?

The subfolders feature is probably less needed since it only impacts a few backends, though OneDrive is a big one. I have it basically done except for updating all backends… so I think I could make subfolders available only to the backends modified for it, then enable additional backends as they get updated to handle subfolders.

There is now. :slightly_smiling_face: Since the upcoming version numbers are a bit unpredictable, I named this milestone “Upcoming beta”. Ideally, the changes in this milestone would first be made available via canary and experimental releases before appearing in a beta. I think we can attempt to keep track of this via labels (e.g., canary/<version> and experimental/<version>).

I’m new to GitHub milestones, but this plan seems reasonable. I’ve long wanted a lightweight way to focus.

The Critical issues with the current canary topic had a different method, and neither is a fancy tracker tool…

Any time there’s tracking, even just GitHub Issues, there’s a certain amount of administration, so let me ask: does initial use start as a self-service honor system for what gets added? I hope justification is supplied unless it’s hugely obvious. This would be somewhat similar to promoting a feature request, except maybe a feature request would have to work harder if the milestone is already full of critical bugs that need attention.

CheckingErrorsForIssue1400 and FoundIssue1400Error test case, analysis, and proposal #3868 is one I’m pointing to as a possible example of how to justify making the Upcoming beta milestone. It’s much-reported (statistics supplied) and reproducible (steps supplied). Those two make it potentially actionable, but for any near (and I hope it’s near) milestone, the bar may be higher. This one is analyzed to code, and has a fix PoC. What it doesn’t have is an assignment (which Brave tries to use). I propose that as a “bonus” for an “Add”.

I also propose that regressions from a previous beta found in canary or experimental get on the list more easily, although not automatically. For a given area, quality should increase, even if the initial ship is less than perfect.

Regarding what gets added to a milestone, in my experience it’s typically up to the developer(s) who maintain the release. The community can certainly lobby for a particular feature/fix, but ultimately it depends on the availability of developer time and their expectations of what can realistically get done.

I do think that more frequent releases would be helpful (and reduce the urge to keep adding “one more thing”). In fact, for this current beta release cycle (which will hopefully end soon), I would prefer that we begin to focus only on testing and bug fixes, and no longer include any more new features. The more things we try to include, the more testing is needed and the longer the delay. As you already know, there are a lot of changes in the queue that beta users have been waiting a long time for.

@BlueBlock, I just re-read the title of this thread. Are you asking what needs to happen to create a “stable” release? If so, then many of my comments here might be out of place. Some of my comments were with regards to the next beta release, and not a “stable” release. Sorry for any confusion that this may have caused.

I’m thinking of both the next release and the next beta. It would be great to have a narrow list of items for the next beta. And then a set of features targeted for coming out of beta.

It would be great to have frequent betas going out. Weekly for the small bug fixes or even multiple times a week. And then monthly? for bigger fixes or features.

Having such a large span of time between what users are running and the code base can make it difficult to identify problems.

I’m new to the GitHub features for releases. I’m not familiar with how we move issues/PRs to target different releases like canary. I just need to get an understanding of how to see what issues/PRs are targeting canary etc., and then how PRs can get moved between different releases or milestones. I just need to do some YouTube education on GitHub features LOL. We must be able to work on long-term features easily… maybe just more branches with a monthly targeted release. And we move PRs around as we see fit? Not sure.

I wrote a plan on releases a really long time ago here: Discussion: release cycle

But the primary reason it’s not easy to implement any kind of “filtered” release seems to be the difficulty of removing a pull request/feature that was already merged.

Due to this, our flow now is strictly “once everything in master is ready, then we can upgrade it to the next stage”. Which of course creeps on forever.
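Merged work can in principle be backed out with `git revert -m 1` on the merge commit, though the revert lingers in history and makes re-landing the feature awkward later, which is probably part of why any “filtered” release is hard. A minimal local sketch (the repository, branch, and file names are all made up):

```shell
# Sketch: undoing an already-merged feature branch via a revert commit.
set -e
git init -q demo && cd demo
git config user.email "ci@example.com" && git config user.name "CI"
git commit -q --allow-empty -m "initial"
main=$(git symbolic-ref --short HEAD)     # master or main, depending on git
git checkout -q -b feature
echo "new feature" > feature.txt
git add feature.txt && git commit -q -m "add feature"
git checkout -q "$main"
git merge -q --no-ff -m "merge feature" feature
# Revert the merge commit; -m 1 keeps the first (mainline) parent.
git revert -m 1 --no-edit HEAD
```

After this, `feature.txt` is gone from the mainline again, but the feature branch cannot simply be re-merged later without first reverting the revert, which is part of the pain.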

I think Kenneth has been hesitant to push any more releases exactly because of that overhead and uncertainty about the quality of recent changes.

On the flip side I try to make sure all pull requests have feedback or are merged within a short time to avoid discouraging contributors. A short turnaround time between opening the PR and seeing it in canary is in my mind the only way to keep people involved in the process.

I have to look at how other projects handle this. I think your release cycle plan looks good.

I’d seen that plan before, but lost track of where it was. It touches on some of the same points I made in:

Release: (beta) 2019-07-14 which got a bit of discussion, then went quiet like @Pectojin’s topic.

Maybe I should just cut-and-paste my bullet list here? Though user-inspired, it’s only developer-fixable…

I’d love to hear what @BlueBlock finds out, and thanks for raising the issue, even if it’s wandering a bit.

I know we’re wandering across various topics here, but it seems a good place to address some common issues.

To get the next build out, besides any WIP PRs, it seems like we could wrap up the existing PRs and put a build out? Would we put out a canary build first?

p.s. I want to be on .NET 4.6.2 because I feel it is important to get there for reasons I’ve outlined before. I think I’ve addressed concerns about user impact, so I’m hoping those concerns have been resolved and that PR can move. I’m not trying to drag the conversation here; if needed we can take it to the .NET 4.6.2 PR.

Getting there, but it’s making the scheduling look worse. I took a break yesterday, but just added more. There’s been a wish to get broader developer input on it, so anybody who hasn’t been asked, feel free. Current question is on user benefits of .NET 4.6.2 versus 4.5.2, and proper prep for 4.6.2 if it goes out. “Important” does not directly translate to “Important for users to have right now”, or does it? Discuss…

On the linux mono side, users should at least be at mono v5 if the user followed the installation instructions.
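If a minimum mono version were actually enforced at startup, a launcher script could compare version strings. This is only a sketch: `version_ge` is a hypothetical helper relying on GNU `sort -V`, and a real check would parse the `mono --version` banner rather than hard-code the string:

```shell
# Hypothetical minimum-version gate; relies on GNU sort -V for version order.
version_ge() {
    # True if $1 >= $2 when compared as version strings.
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required="5.0.0"
installed="5.18.0"   # a real script would extract this from: mono --version

if version_ge "$installed" "$required"; then
    echo "mono $installed meets the $required minimum"
else
    echo "mono $installed is too old; please upgrade" >&2
fi
```

The same comparison would flag the mono 4.6.x installs discussed below as too old.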

On the Windows side, for the user there is little difference between 4.5.2 and 4.6.2.

I’m not sure how it impacts scheduling if we’re talking about going to 4.5.2 versus 4.6.2. They both would take an equal, actually identical, amount of time. And I’m not sure I really understand what impact to scheduling you are seeing.

Duplicati 2 came out in 2014. The mono directions might be from March 2018, per What about a manual?

Based on that, making users maybe advance mono by 27 months (some risk/work) seems like a net loss.

.NET 4.5.2 target might be safe enough to run on old mono without taking time to go the heavy-warnings route. Agree that canary testing lots of updated libs would take a while either way, so still hoping for a basic aftp fix.

Detailed in “Update framework to 462” PR. Biggest delay is if the warn-before-requiring-it plan is deployed.

And so that was determined to be the proper installation of Duplicati on Linux.

You seem to be relying on outliers like a user on mono 4.6.

How about requiring users to have mono v5, from 18 months ago? That sure seems like a good compromise, doesn’t it?

Again, this is all in order to support mono 4.6 and 4.8 users who should be on v5 if not v6 per the installation docs, let alone proper system maintenance, bug fixes, getting to TLS 1.2 etc.

The aftp issue is fixed.

That related to staying on 4.5. Moving to 4.5.2 would have the same requirement as moving to 4.6.2.

Some info on development cycles that I’ve been reviewing. I’m getting used to git in a group setting, as I’ve dealt almost entirely with enterprise environments.

It looks like there are two trains of thought.

One being trunk-based, but I think we can toss this immediately for open source, as it does not provide code review and hence has low security.

The second is likely what we might want to follow and it is Git Flow.

GitHub has a nice explanation here at time 3:07:
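For reference, the core of Git Flow is a long-lived `develop` integration branch plus short `feature/*` branches, with `master` reserved for release states. A rough local sketch (the feature name is invented for illustration):

```shell
# Sketch of Git Flow's branch structure: master for releases,
# develop for integration, feature/* for work in progress.
set -e
git init -q gitflow-demo && cd gitflow-demo
git config user.email "ci@example.com" && git config user.name "CI"
git commit -q --allow-empty -m "initial release state"    # on master/main
git checkout -q -b develop                                # integration branch
git checkout -q -b feature/subfolders                     # hypothetical feature
echo "work" > subfolders.txt
git add subfolders.txt && git commit -q -m "implement subfolders"
git checkout -q develop
git merge -q --no-ff -m "merge feature/subfolders" feature/subfolders
```

Releases then come from merging `develop` into `master` and tagging, which maps loosely onto promoting canary to experimental to beta.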

Maybe. As you can see from its history, it was mostly an individual effort (for which I am very thankful).
Maybe it was said that way to cover the inevitable how-can-I-get-it rather than the you-must-do-just-this idea.

Without usage reporter data, how do we know outliers? Survey of forum posters is probably skewed.
What’s known is that latest LTS of very popular distro fails by default (I guess) due to its mono 4.6.2. Whatever mono gets chosen, can our OS installers at least be updated before beta for new installs? Existing users just take their chances on updates, and I hope there’s lots of help handling any fallout.
Announcements category could be used to get a heads-up to those registered. Any better channels?

My July 26 attempt to survey the forum got an idea of what’s been mentioned, but not what’s now run:

Forum Google search survey, mostly taken 07/26/2019

17	"RedHat"
2	"Red Hat"
6	"RHEL"
1	"RHEL 6"
0	"Enterprise Linux 6"
0	"RHEL 7"
0	"Enterprise Linux 7"
48	"CentOS"
5	"CentOS 6"
27	"CentOS 7"
39	"Fedora"

1	"SUSE"
16	"openSUSE"
0	"SLES"
0	"Enterprise Server"

70	"Arch" (rolling release)
82	"Manjaro" (rolling release)

219	"Synology"
109	"Synology" "mono"

118	"QNAP"
25	"QNAP" "mono"

4	"Slackware"
97	"Unraid"
22	"Unraid" "mono"

237	"Debian"
23	"Debian 8"
37	"Debian" "Jessie"
39	"Debian 9"
33	"Debian" "Stretch"
350	"Ubuntu"
99	"Ubuntu 16.04"
5	"Ubuntu" "Xenial"
56	"Ubuntu 18.04"
5	"Ubuntu" "Bionic"
34	"Ubuntu" "LTS"
111	"Linux Mint"
5	"Linux Mint 18"
3	"Linux Mint" (Sarah OR Serena OR Sonya OR Sylvia)
7	"Linux Mint 19"
9	"Linux Mint" (Tara OR Tessa OR Tina)
0	"LMDE"

1	"Gentoo"

0	"Mageia"

228	"macOS"
121	"OSX"

Versions for mono was your good site to see what distro ships what, and mono-project seems to support:

Ubuntu 16.04 and 18.04 (high usage, so it’s covered well if user downloads either before or after blowup). Ubuntu 14.04 reached end of standard support this past April, so that gets it off my worries about support.

Debian 9 and 10
Debian “stretch” Release Information

Debian 9.9 was released April 27th, 2019. Debian 9.0 was initially released on June 17th, 2017.

Debian “buster” Release Information

Debian 10.0 was released July 6th, 2019.

CentOS/RHEL 6,7,8 (taking things way back, although note I’m not yet looking up mono versions).

Fedora has 29 and 28

Among distros of interest that mono-project doesn’t support (if we’re willing to relax the documentation note), Slackware/Unraid has mono, and Synology has at least v5.18.0.240-12, so 5.0 looks a little better.

Mono 5.0.0 Release Notes also is where compilers (and much else) changed, so C# 7 is better handled:

Release date: 10 May 2017

C# Compiler

I keep hoping that this forces move to at least mono 4.8, but the distro security teams aren’t biting AFAIK. No new developer commenters seem to be joining in here, so I’ll yield, but expect help on supporting this. Canary will let us fine-tune the response systems some before this goes to beta and affects more users.

as part of a large bundle, rather than a small slide-in which would need less testing, so how do we get to beta?

FYI I just opened “Google Drive (full access) login” restricted in early 2020 #3875 for possible milestones. Should we add this to Upcoming beta or hope we can push another out before early 2020 brings a crisis?


mono 5.0 is also a nice even number, which is not just an aesthetic thing. .NET Portability Analyzer has:

[analyzer output screenshot not preserved]

which maybe will help figure out some API holes, or maybe the tool doesn’t know, and code throws later:

Reference assemblies

The reference assemblies were updated to fully match the .NET 4.6.2 API set. This means that you no longer get compilation errors about missing classes/methods when building against one of the .NET profiles via msbuild/xbuild.

Note that at runtime certain APIs that Mono doesn’t (yet) implement will still throw an exception.

However, we’re probably helped by having few-to-no exactly-5.0.0 mono installs around. Most 5.x are later.

Back on the topic of release workflow…I strongly agree with the need for regular (possibly automated) canary releases. Without these, we will often have the issue (like now) where changes sit basically untested by users.

Gitflow Workflow | Atlassian Git Tutorial is also a good presentation.

Git Flow has been widely advertised, but also criticized for giving in to bad practices.

Gitflow is a Poor Branching Model Hack uses rather prickly language to point out the central problem behind the model: not having the full range of test cases to support continuous development. Unit testing is not enough.

Introduction to GitLab Flow does not criticize, but provides a good description of a simple, no-hassle branching model for those who have bothered to build up and equip their continuous deployment practices properly.
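Applied to our channels, GitLab Flow’s release-branch pattern would mean fixes land on master first and only get cherry-picked into a release branch, so unfinished work never holds a beta hostage. A rough sketch (branch names, file names, and the fix itself are invented):

```shell
# Sketch of GitLab Flow: master moves ahead freely; a release branch
# receives only the fixes cherry-picked into it.
set -e
git init -q gitlabflow-demo && cd gitlabflow-demo
git config user.email "ci@example.com" && git config user.name "CI"
echo "v1" > app.txt
git add app.txt && git commit -q -m "initial"
git branch release-2.0.4                       # snapshot for the beta channel
echo "fix" > fix.txt                           # bug fix developed on master
git add fix.txt && git commit -q -m "fix: database rebuild stall"
fix_commit=$(git rev-parse HEAD)
echo "wip" > feature.txt                       # unfinished work stays on master
git add feature.txt && git commit -q -m "wip: subfolders"
git checkout -q release-2.0.4
git cherry-pick -x "$fix_commit"               # only the fix reaches the release
```

The release branch ends up with the fix but without the work-in-progress feature, which is exactly the “filtered” release that a merge-everything-to-master flow struggles with.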

Finally, the conversation I tried to start a while ago.


I know @verhoek set up an automated system for producing nightly builds that were signed with a different key (on the build server). I completely dropped that one, but I am sure it can be picked up if there is a push for it.

This is what I consider the blocker for a non-beta release. The database rebuild is super slow, and sometimes fails. For a production-ready system this should not happen.

There are also cases where the database suddenly breaks, but maybe that part is actually fixed now.

The shutdown is nice, but Duplicati should be able to handle a hard power off, so that is not a blocker for me. Symlinks and paths are working AFAIK, and some translations are complete.
Subfolders is a new feature, so I would not delay a non-beta for that.

For what it’s worth, database rebuilding has gotten REALLY good on my laptop after
I have some periodic issues where the local DB is corrupted when a backup over an SSH tunnel times out (e.g. over a company VPN), so I’ve been rebuilding a good number of times over the last few weeks. I’m still surprised by how quick the rebuilding is without scavenging :smiley: