I’m thinking of both the next release and the next beta. It would be great to have a narrow list of items for the next beta. And then a set of features targeted for coming out of beta.
It would be great to have frequent betas going out: weekly, or even multiple times a week, for small bug fixes, and then maybe monthly for bigger fixes or features.
Having such a large span of time between what users are running and the code base can make it difficult to identify problems.
I’m new to the GitHub features for releases. I’m not familiar with how we move issues/PRs to target different releases like canary. I just need to get an understanding of how to see what issues/PRs are targeting canary etc., and then how PRs can get moved between different releases or milestones. I just need to do some YouTube education on GitHub features LOL. We must be able to work on long-term features easily… maybe just more branches with a monthly targeted release, and we move PRs around as we see fit? Not sure.
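Under the branch-per-release idea, at least the “what is targeting canary” question becomes a plain git question. A toy sketch (the repository, branch names, and commit messages below are all made up for illustration):

```shell
set -e
# toy repository purely for illustration; in the real repo you would skip this setup
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=master init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "base"
git branch canary                          # pretend this is the last canary cut
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "fix: aftp timeout"
# commits on master that have not been promoted to the canary branch yet
git log --oneline canary..master
```

The `canary..master` range syntax is the part worth knowing: it lists exactly the commits that master has and canary does not, which is the set still waiting to ship.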
But the primary reason it’s not easy to implement any kind of “filtered” release seems to be the difficulty of removing a pull request/feature that was already merged.
Because of this, our flow is now strictly “once everything in master is ready, we can upgrade it to the next stage”, which of course creeps on forever.
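If we did adopt release branches, the “can’t un-merge” problem can be sidestepped by inverting the flow: master keeps everything, and a release branch only receives the commits picked for that release. A minimal toy sketch (branch names and commit messages are hypothetical, not our actual layout):

```shell
set -e
# toy repository purely for illustration
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=master init -q
ci() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
ci commit -q --allow-empty -m "base"
git branch release/beta                    # hypothetical release branch
ci commit -q --allow-empty -m "fix: database rebuild speed"
fix=$(git rev-parse HEAD)
ci commit -q --allow-empty -m "feat: subfolders (WIP)"   # stays out of the beta
git checkout -q release/beta
ci cherry-pick --allow-empty -x "$fix"     # promote only the chosen fix
git log --oneline                          # beta history has the fix, not the WIP feature
```

Nothing ever has to be removed: a feature that isn’t ready simply never gets cherry-picked, and the `-x` flag records which master commit each pick came from.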
I think Kenneth has been hesitant to push any more releases exactly because of that overhead and uncertainty about the quality of recent changes.
On the flip side, I try to make sure all pull requests get feedback or are merged within a short time, to avoid discouraging contributors. A short turnaround between opening a PR and seeing it in canary is, in my mind, the only way to keep people involved in the process.
I know we’re going on various topics here but it seems a good place to address some common issues.
To get the next build out, besides any WIP PRs, it seems like we could wrap up the existing PRs and put a build out? Would we put out a canary build first?
p.s. I want to be on .NET 4.6.2 because I feel it is important to get there for reasons I’ve outlined before. I think I’ve addressed concerns about user impact, so I’m hoping concerns have been addressed and that PR can move. I’m not trying to drag the conversation here; if needed we can take it to the .NET 4.6.2 PR.
Getting there, but it’s making the scheduling look worse. I took a break yesterday, but just added more. There’s been a wish to get broader developer input on it, so anybody who hasn’t been asked, feel free. Current question is on user benefits of .NET 4.6.2 versus 4.5.2, and proper prep for 4.6.2 if it goes out. “Important” does not directly translate to “Important for users to have right now”, or does it? Discuss…
On the linux mono side, users should at least be at mono v5 if the user followed the installation instructions.
On the Windows side for the user there is little difference between 452 and 462.
I’m not sure how it impacts scheduling if we’re talking about going to 452 versus 462. They both would take an equal, actually identical, amount of time. And I’m not sure I really understand what impact to scheduling you are seeing.
Duplicati 2 came out in 2014. The mono directions might be from March 2018, per “What about a manual?”
Based on that, making users maybe advance mono by 27 months (some risk/work) seems like a net loss.
A .NET 4.5.2 target might be safe enough to run on old mono without taking time for the heavy-warnings route. Agreed that canary testing lots of updated libs would take a while either way, so I’m still hoping for the basic aftp fix.
Detailed in “Update framework to 462” PR. Biggest delay is if the warn-before-requiring-it plan is deployed.
Maybe. As you can see from its history, it was mostly an individual effort (for which I am very thankful).
Maybe it was said that way to cover the inevitable how-can-I-get-it question, rather than as a you-must-do-just-this instruction.
Without usage reporter data, how do we know outliers? Survey of forum posters is probably skewed.
What’s known is that latest LTS of very popular distro fails by default (I guess) due to its mono 4.6.2. Whatever mono gets chosen, can our OS installers at least be updated before beta for new installs? Existing users just take their chances on updates, and I hope there’s lots of help handling any fallout. Announcements category could be used to get a heads-up to those registered. Any better channels?
My July 26 attempt to survey the forum got an idea of what’s been mentioned, but not what’s now run:
Versions for mono was your good site to see what distro ships what, and mono-project seems to support:
Ubuntu 16.04 and 18.04 (high usage, so it’s covered well whether a user downloads before or after the blowup). Ubuntu 14.04 reached end of standard support this past April, so that takes it off my list of support worries.
CentOS/RHEL 6,7,8 (taking things way back, although note I’m not yet looking up mono versions).
Fedora has 29 and 28
In distros of interest that mono-project.com doesn’t support (if we’re willing to relax documentation note), Slackware/Unraid has mono 184.108.40.206, and Synology has at least v220.127.116.11-12, so 5.0 looks a little better.
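Whatever floor we end up picking, the install instructions or a startup script could check for it up front instead of letting users hit runtime breakage. A sketch, assuming a 5.0 minimum purely as a placeholder, that parses the first line of `mono --version` (a captured sample string is used here so the snippet runs even without mono installed):

```shell
set -e
MIN=5.0   # placeholder minimum, not a decided value
# captured sample output; on a real system use:
#   have=$(mono --version | awk 'NR==1 {print $5}')
sample='Mono JIT compiler version 5.18.0.240 (tarball)'
have=$(printf '%s\n' "$sample" | awk 'NR==1 {print $5}')
# sort -V orders version strings numerically; if MIN sorts first (or ties), we are OK
if [ "$(printf '%s\n%s\n' "$MIN" "$have" | sort -V | head -n1)" = "$MIN" ]; then
  echo "mono $have meets minimum $MIN"
else
  echo "mono $have is older than minimum $MIN" >&2
fi
```

The `sort -V` trick avoids hand-rolled version parsing, so the same check works whether the floor ends up being 4.8, 5.0, or something later.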
I keep hoping that this forces move to at least mono 4.8, but the distro security teams aren’t biting AFAIK. No new developer commenters seem to be joining in here, so I’ll yield, but expect help on supporting this. Canary will let us fine-tune the response systems some before this goes to beta and affects more users.
as part of a large bundle, rather than a small slide-in which would need less testing, so how do we get to beta?
The reference assemblies were updated to fully match the .NET 4.6.2 API set. This means that you no longer get compilation errors about missing classes/methods when building against one of the .NET profiles via msbuild/xbuild.
Note that at runtime certain APIs that Mono doesn’t (yet) implement will still throw an exception.
However, we’re probably helped by having few-to-no exactly-5.0.0 mono installs around; most 5.x installs are later.
Back on the topic of release workflow…I strongly agree with the need for regular (possibly automated) canary releases. Without these, we will often have the issue (like now) where changes sit basically untested by users.
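On the “possibly automated” part: even a scheduled job on the build server that cuts a canary from whatever is on master would help. As a sketch, assuming a hypothetical `build-canary.sh` wrapper around the existing build and signing steps, a crontab entry could look like:

```
# hypothetical: build and publish a canary every Monday at 03:00 server time
0 3 * * 1  /opt/duplicati-ci/build-canary.sh >> /var/log/duplicati-canary.log 2>&1
```

The exact schedule matters less than the predictability: users would know a canary with recent fixes is never more than a week away.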
Git Flow has been widely advertised, but also criticized for giving in to bad practices.
“Gitflow is a Poor Branching Model Hack” uses rather prickly language to point out the central problem behind the model: not having the full range of test cases to support continuous development. Unit testing is not enough.
Introduction to GitLab Flow does not criticize, but provides a good description of a simple, no-hassle branching model for those who have bothered to build up and equip their continuous deployment practices properly.
I know @verhoek set up an automated system for producing nightly builds that were signed with a different key (on the build server). I completely dropped that one, but I am sure it can be picked up if there is a push for it.
This is what I consider the blocker for a non-beta release. The database rebuild is super slow, and sometimes fails. For a production-ready system this should not happen.
There are also cases where the database suddenly breaks, but maybe that part is actually fixed now.
The shutdown is nice, but Duplicati should be able to handle a hard power off, so that is not a blocker for me. Symlinks and paths are working AFAIK, and some translations are complete.
Subfolders is a new feature, so I would not delay a non-beta for that.
For what it’s worth database rebuilding has gotten REALLY good on my laptop after https://github.com/duplicati/duplicati/pull/3758
I have some periodic issues where the local DB is corrupted when a backup over an SSH tunnel times out (e.g. over a company VPN), so I’ve been rebuilding a good number of times in the last few weeks. I’m still surprised at how quick the rebuilding is without scavenging.
One of the sticking points I see with releases is that back-ends really need to be plugins, not part of a monolithic package. Releases get pushed out because something external breaks in a back-end, but other, less tested development has been done in the core in the meantime. Then we may have issues (sometimes not known immediately), and the users of the previously broken back-end are caught between a rock and a hard place.
Also, I think we need to look at using milestones (as it was mentioned before). This helps set users’ expectations and lets developers focus.
Also, one of the big things I think we need to start using is ‘feature freezes.’ It seems like Duplicati is doing feature development at breakneck speed, but reliability and stability are falling by the wayside. Feature freezes would help let the dust settle on bugs and issues. Right now, upgrading is always a source of anxiety for me, because backup and recovery are critical aspects of data security, especially now in the age of ransomware. Having a backup/restore system that may just not work is very wearing.
I completely agree on reliability. The file handling and database need to be pounded on so we can identify weak points. I’m thinking more unit tests are really needed to identify such issues and to always know whether a change impacts reliability. This is something I want to work on ASAP.
p.s. The only two features I’m working on are subfolders and adding parity files. Subfolders are needed for some backends to be usable for large backups. The parity files add a good safety net against bit rot etc. Neither is needed for the next release nor for coming out of beta, but I think both are important to have.
And as for getting a new canary or beta release out, @kenkendk says there are some blockers, but if we’re fixing bugs, why not get those fixes out to users? Besides adding more unit tests as mentioned, how else will we know when it is ready for a release?