Issues to address to get out of beta

Duplicati 2 came out in 2014. The mono installation directions might date from March 2018, per What about a manual?

Based on that, making users advance mono by perhaps 27 months (with some risk and work involved) seems like a net loss.

A .NET 4.5.2 target might be safe enough to run on old mono without taking the time to go the heavy-warnings route. I agree that canary testing lots of updated libraries would take a while either way, so I'm still hoping for a basic aftp fix.

This is detailed in the “Update framework to 462” PR. The biggest delay comes if the warn-before-requiring-it plan is deployed.
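
If the warn-before-requiring-it plan is what we deploy, the warning itself should be cheap. Here is a minimal sketch of runtime detection, using the well-known Mono.Runtime reflection trick and a hypothetical 5.0 threshold (the actual minimum version is still being debated below):

```csharp
using System;
using System.Reflection;

static class MonoVersionWarning
{
    public static void WarnIfOldMono()
    {
        // The type Mono.Runtime exists only when running on Mono.
        Type monoRuntime = Type.GetType("Mono.Runtime");
        if (monoRuntime == null)
            return; // .NET Framework on Windows; nothing to warn about.

        // GetDisplayName is a private static method that returns a string
        // like "5.18.0.240 (tarball ...)".
        MethodInfo displayName = monoRuntime.GetMethod(
            "GetDisplayName", BindingFlags.NonPublic | BindingFlags.Static);
        if (displayName == null)
            return;

        string name = (string)displayName.Invoke(null, null);
        Version version;
        if (Version.TryParse(name.Split(' ')[0], out version)
            && version < new Version(5, 0)) // hypothetical threshold
        {
            Console.Error.WriteLine(
                "Warning: Mono " + version + " detected. A future release" +
                " will require a newer Mono; please upgrade.");
        }
    }
}
```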

And so that was determined to be the proper installation of Duplicati on Linux.

You seem to be relying on outliers, like a user on mono 4.6.

How about requiring users to have mono v5, which came out 18 months ago? That sure seems like a good compromise, doesn’t it?

Again, this is all in order to support mono 4.6 and 4.8 users, who should be on v5 if not v6 per the installation docs, to say nothing of proper system maintenance, bug fixes, getting to TLS 1.2, etc.

The aftp issue is fixed.

That related to staying on 4.5. Moving to 4.5.2 would have the same requirement as moving to 4.6.2.

Some info on the development cycle that I’ve been reviewing. I’m getting used to using git in a group, as I’ve dealt almost entirely with enterprise environments.

It looks like there are two schools of thought.

One is trunk-based development, but I think we can toss this immediately for open source, as it does not provide code review and hence offers low security.

The second, Git Flow, is likely what we might want to follow.

GitHub has a nice explanation here at time 3:07:
https://youtu.be/aJnFGMclhU8?t=187

Maybe. As you can see from its history, it was mostly an individual effort (for which I am very thankful).
Maybe it was said that way to cover the inevitable how-can-I-get-it question rather than as a you-must-do-just-this mandate.

Without usage-reporter data, how do we know who the outliers are? A survey of forum posters is probably skewed.
What’s known is that the latest LTS of a very popular distro fails by default (I gather) due to its mono 4.6.2. Whatever mono version gets chosen, can our OS installers at least be updated before beta for new installs? Existing users just take their chances on updates, and I hope there’s lots of help available for handling any fallout.
The Announcements category could be used to give a heads-up to those registered. Any better channels?

My July 26 attempt to survey the forum gave an idea of what’s been mentioned, but not what’s now being run:

Forum Google search survey (hit counts per search term), mostly taken 07/26/2019

17	"RedHat"
2	"Red Hat"
6	"RHEL"
1	"RHEL 6"
0	"Enterprise Linux 6"
0	"RHEL 7"
0	"Enterprise Linux 7"
48	"CentOS"
5	"CentOS 6"
27	"CentOS 7"
39	"Fedora"

1	"SUSE"
16	"openSUSE"
0	"SLES"
0	"Enterprise Server"

70	"Arch" (rolling release)
82	"Manjaro" (rolling release)

219	"Synology"
109	"Synology" "mono"

118	"QNAP"
25	"QNAP" "mono"

4	"Slackware"
97	"Unraid"
22	"Unraid" "mono"

237	"Debian"
23	"Debian 8"
37	"Debian" "Jessie"
39	"Debian 9"
33	"Debian" "Stretch"
350	"Ubuntu"
99	"Ubuntu 16.04"
5	"Ubuntu" "Xenial"
56	"Ubuntu 18.04"
5	"Ubuntu" "Bionic"
34	"Ubuntu" "LTS"
111	"Linux Mint"
5	"Linux Mint 18"
3	"Linux Mint" (Sarah OR Serena OR Sonya OR Sylvia)
7	"Linux Mint 19"
9	"Linux Mint" (Tara OR Tessa OR Tina)
0	"LMDE"

1	"Gentoo"

0	"Mageia"

228	"macOS"
121	"OSX"

Versions for mono was the good site you pointed to for seeing which distro ships what, and mono-project seems to support:

Ubuntu 16.04 and 18.04 (high usage, so it’s covered well whether the user downloads mono before or after a blowup). Ubuntu 14.04 reached end of standard support this past April, so that takes it off my list of support worries.

Debian 9 and 10
Debian “stretch” Release Information

Debian 9.9 was released April 27th, 2019. Debian 9.0 was initially released on June 17th, 2017.

Debian “buster” Release Information

Debian 10.0 was released July 6th, 2019.

CentOS/RHEL 6, 7, and 8 (taking things way back, although note I’m not yet looking up mono versions).

Fedora has 29 and 28.

Among distros of interest that mono-project.com doesn’t support (if we’re willing to relax the documentation note), Slackware/Unraid has mono 5.0.1.1, and Synology has at least v5.18.0.240-12, so requiring 5.0 looks a little better.

The Mono 5.0.0 Release Notes are also where the compilers (and much else) changed, so C# 7 is better handled:

Release date: 10 May 2017

C# Compiler
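
For illustration, here are two of the C# 7 constructs that the newer compiler handles; this is just a hedged sample of the language support those notes refer to, not Duplicati code:

```csharp
using System;

class CSharp7Sample
{
    static void Main()
    {
        // C# 7 "out variable" declaration:
        if (int.TryParse("443", out int port))
            Console.WriteLine($"Parsed port {port}");

        // C# 7 type pattern in an "is" expression:
        object setting = "5 retries";
        if (setting is string text)
            Console.WriteLine("String setting: " + text);
    }
}
```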

I keep hoping that this forces a move to at least mono 4.8, but the distro security teams aren’t biting, AFAIK. No new developer commenters seem to be joining in here, so I’ll yield, but I expect help supporting this. Canary will let us fine-tune the response systems some before this goes to beta and affects more users.

as part of a large bundle, rather than a small slide-in fix which would need less testing, so how do we get to beta?

FYI, I just opened “Google Drive (full access) login” restricted in early 2020 #3875 for possible milestones. Should we add this to the Upcoming beta milestone, or hope we can push another beta out before early 2020 brings a crisis?

EDIT:

mono 5.0 is also a nice round number, and that is not just an aesthetic thing. The .NET Portability Analyzer has:

(screenshot: .NET Portability Analyzer results)

which may help figure out some API holes, or maybe the tool doesn’t know and the code throws later:

Reference assemblies

The reference assemblies were updated to fully match the .NET 4.6.2 API set. This means that you no longer get compilation errors about missing classes/methods when building against one of the .NET profiles via msbuild / xbuild.

Note that at runtime certain APIs that Mono doesn’t (yet) implement will still throw an exception.

However, we’re probably helped by there being few to no mono installs of exactly 5.0.0 around; most 5.x installs are later.
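
If we want more certainty than the Portability Analyzer alone gives, one belt-and-braces option is to probe suspect calls at startup and catch the runtime exceptions those notes mention. A hedged sketch; the TLS 1.2 probe is just one plausible candidate, picked because TLS 1.2 came up earlier in this thread:

```csharp
using System;
using System.Net;

static class ApiProbe
{
    // Run a call and report whether this runtime actually implements the
    // APIs it touches; compiling against the 4.6.2 reference assemblies
    // is no guarantee, per the release notes quoted above.
    static bool Probe(string name, Action call)
    {
        try
        {
            call();
            return true;
        }
        catch (NotImplementedException) { } // stubbed-out API
        catch (NotSupportedException) { }   // e.g. missing TLS support
        catch (MissingMethodException) { }  // API absent at runtime
        Console.Error.WriteLine("Unsupported on this runtime: " + name);
        return false;
    }

    static void Main()
    {
        Probe("TLS 1.2", () =>
            ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12);
    }
}
```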

Back on the topic of release workflow… I strongly agree with the need for regular (possibly automated) canary releases. Without them, we will often have the issue (like now) where changes sit basically untested by users.

Gitflow Workflow | Atlassian Git Tutorial is also a good presentation.

Git Flow has been widely advertised, but also criticized for enabling bad practices.

Gitflow is a Poor Branching Model Hack uses rather prickly language to point out the central problem behind the model: not having the full range of test cases needed to support continuous development. Unit testing is not enough.

Introduction to GitLab Flow does not criticize, but provides a good description of a simple, no-hassle branching model for those who have bothered to build up and equip their continuous deployment practices properly.

Finally, the conversation I tried to start a while ago.


I know @verhoek set up an automated system for producing nightly builds that were signed with a different key (on the build server). I completely dropped that one, but I am sure it can be picked up if there is a push for it.

This is what I consider the blocker for a non-beta release. The database rebuild is super slow, and sometimes fails. For a production-ready system this should not happen.

There are also cases where the database suddenly breaks, but maybe that part is actually fixed now.

The shutdown is nice, but Duplicati should be able to handle a hard power off, so that is not a blocker for me. Symlinks and paths are working AFAIK, and some translations are complete.
Subfolders is a new feature, so I would not delay a non-beta for that.

For what it’s worth, database rebuilding has gotten REALLY good on my laptop after https://github.com/duplicati/duplicati/pull/3758
I have periodic issues where the local DB is corrupted when a backup over an SSH tunnel times out (e.g. over a company VPN), so I’ve been rebuilding a good number of times over the last few weeks. I’m still surprised at how quick the rebuilding is without scavenging :smiley:

One of the sticking points I see with releases is that back-ends really need to be plugins and not part of a monolithic package (a sketch of what that seam could look like follows below). Releases get pushed out because something external breaks in a back-end, but other, less tested development has been done in the core in the meantime. Then we may have issues (sometimes not known immediately), and the users of the previously broken back-end are caught between a rock and a hard place.
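
To make that concrete, here is a minimal sketch of what such a plugin seam could look like. IBackendPlugin and BackendLoader are hypothetical names for illustration, not the existing Duplicati interfaces:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

// Hypothetical contract: each back-end ships in its own assembly and can
// be released (or rolled back) independently of the core package.
public interface IBackendPlugin
{
    string ProtocolKey { get; }                // e.g. "aftp", "gdrive"
    void Put(string remoteName, Stream data);
    Stream Get(string remoteName);
}

public static class BackendLoader
{
    // Scan a plugins directory and instantiate every IBackendPlugin found,
    // so a broken back-end can be fixed by dropping in one new DLL.
    public static IEnumerable<IBackendPlugin> Load(string pluginDir)
    {
        foreach (string dll in Directory.EnumerateFiles(pluginDir, "*.dll"))
            foreach (Type type in Assembly.LoadFrom(dll).GetTypes()
                .Where(t => typeof(IBackendPlugin).IsAssignableFrom(t)
                            && !t.IsAbstract))
                yield return (IBackendPlugin)Activator.CreateInstance(type);
    }
}
```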

Also, I think we need to look at using milestones (as was mentioned before). This helps set users’ expectations and lets developers focus.

Also, one of the big things I think we need to start using is ‘feature freezes.’ It seems like Duplicati is doing feature development at breakneck speed, but reliability and stability are going by the wayside. Feature freezes will help let the dust settle on bugs and issues. Right now, upgrading is always a source of anxiety for me, because backup/recovery is a critical aspect of data security, especially in the age of ransomware. Having a backup/restore system that may just not work is very wearing.

I completely agree on reliability. The file handling and database need to be pounded on so we can identify weak points. I think more unittests are really needed to identify such issues and to always know whether any change impacts reliability. This is something I want to work on ASAP; a sketch of the kind of test I mean is below.
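
To show the shape of the test I mean, here is a self-contained NUnit sketch. The toy backup/database/rebuild helpers just copy files and rewrite an index file; in the real project they would be replaced by calls into the engine:

```csharp
using System.IO;
using NUnit.Framework;

[TestFixture]
public class RebuildReliabilityTests
{
    string _src, _dest, _restore, _db;

    [SetUp]
    public void SetUp()
    {
        string root = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        _src = Directory.CreateDirectory(Path.Combine(root, "src")).FullName;
        _dest = Directory.CreateDirectory(Path.Combine(root, "dest")).FullName;
        _restore = Directory.CreateDirectory(Path.Combine(root, "restore")).FullName;
        _db = Path.Combine(root, "index.db");
        File.WriteAllText(Path.Combine(_src, "a.txt"), "payload");
    }

    [Test]
    public void RestoreAfterDatabaseRecreateMatchesSource()
    {
        Backup();
        File.Delete(_db);   // simulate the lost/corrupted local database
        RecreateDatabase(); // the slow, sometimes-failing step at issue
        Restore();

        Assert.AreEqual("payload",
            File.ReadAllText(Path.Combine(_restore, "a.txt")));
    }

    // Toy stand-ins for the real engine calls:
    void Backup()
    {
        foreach (string f in Directory.GetFiles(_src))
            File.Copy(f, Path.Combine(_dest, Path.GetFileName(f)));
        RecreateDatabase();
    }

    void RecreateDatabase() => File.WriteAllLines(_db, Directory.GetFiles(_dest));

    void Restore()
    {
        foreach (string f in File.ReadAllLines(_db))
            File.Copy(f, Path.Combine(_restore, Path.GetFileName(f)), true);
    }
}
```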

P.S. The only two features I’m working on are subfolders and adding parity files. Subfolders are needed for some backends to be usable for large backups. Parity files add a good safety net against bit rot, etc. Neither is needed for the next release nor for coming out of beta, but I think both are important to have.

And as for getting a new canary or beta release out, @kenkendk says there are some blockers, but if we’re fixing bugs, why not get those fixes out to users? Besides adding more unittests, as mentioned, how else will we know when it is ready for a release?

I’m not sure that’s what he was saying, if you’re talking about

it seemed more like agreement with your proposal, quoted above it. I’ll continue based on that assumption.

This discussion is happening at various levels, but there are similarities. For example, if we say damage to the backup destination is a big issue and damage to the database is a smaller issue (because the backup destination can usually recreate the DB), then we can apply the same idea to any release variety, e.g. regarding regressions or newfound fixes. Sometimes a big issue simply has no fix available yet, although seeing it on a list might make people focus more, and one doesn’t need to be a coder to help the coders find good test cases.

To decide when to release something, can we look at what’s fixed-but-not-shipped versus desired fixes? Big-issue fixes made after v2.0.4.5-2.0.4.5_beta_2018-11-28 and v2.0.4.22-2.0.4.22_canary_2019-06-30 exist. Discussion is possible (I can nominate some), or we could put them on a milestone as already closed, so the number of closed issues can be weighed against the not-yet-closed ones when deciding whether to release.

My nominations for things worth squeezing into the next canary, which is probably the path to the next beta:

CheckingErrorsForIssue1400 and FoundIssue1400Error test case, analysis, and proposal #3868 is the database-breaker I’d propose we make sure gets into the next beta (which may mean catching the next canary).

The somewhat obscure backup-breaker is that FluentFTP needs upgrading. Problems after upgrade to 2.0.4.21 describes how the parallel uploads code change tickled a bug (now maybe fixed) in the aftp library.
Update framework to 462 #3844 is needed to update FluentFTP; then someone needs to actually test it (a smoke-test sketch follows below).
EDIT: In the forum post, you can see how this regression was a big beta issue once, so let’s get it fixed.
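
Once the framework and FluentFTP are updated, the minimum manual test is a connect-and-upload smoke run. A hedged sketch against the FluentFTP API of that era; host, credentials, and file names are placeholders:

```csharp
using System.Net;
using FluentFTP;

class AftpSmokeTest
{
    static void Main()
    {
        // Placeholders: point these at a disposable test server and file.
        using (var client = new FtpClient("ftp.example.com"))
        {
            client.Credentials = new NetworkCredential("user", "password");
            client.Connect();

            // Exercise the upload path that the parallel-uploads change broke.
            client.UploadFile("local-test.bin", "/remote-test.bin");

            client.Disconnect();
        }
    }
}
```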

If the blockers are only for a non-beta release, what is preventing a canary or beta from going out more often? For the canary, is it just automation?

I don’t agree. I think that was just his answer to “out of beta” (who knows). I think beta can have blockers too: typically big regressions found in canary or experimental, or maybe a critical fix that’s worth waiting for.

The person who does releases should answer that, but an opinion (I hope it’s still valid) on that issue is:

Personally, I think it’s best for canary to have hand-curated documentation on the changes. That takes time.
There’s also some need to keep track of the state of things and not push a release at an unstable spot.
That’s where milestones could help the release-maker. Automated test success is not the whole story…

Nightlies might have to make do with GitHub’s list of source changes, unless automation can do better.

That’s a what-if about blockers; I want to understand whether there are currently blockers to putting out a new canary or beta. Like you said, that’s a question for the person handling releases.

For canary, if release documentation is desired, that’s fine, but then there should also be automated nightly builds. IMHO, early-and-often builds should be getting out to testers.

“Blockers” is ambiguous. I proposed two “blockers” as in “suggest-waiting-for-fix”. Any others? Where is:

Fix pausing and stopping after upload #3712

Stopping immediately also works but due to issues of corrupting the database when aborting, the stop now button has been removed until the corruption issues are fixed.

Fix ‘stop after current file’ #3836 is being actively worked on; I don’t know if its impact is as bad as the above one’s.

Anybody else have nominations for fixes that you really want to fit into the next release? If so, please describe them.

If you mean non-code “blockers” from a process viewpoint, we could use more thoughts. I gave one person’s, and his “anything meaningful” part is what I hope one could view as closed items on some milestone. Such items would have been largely pre-chosen as worthy of planning, and thus likely worthy of a release note.

In addition to Pectojin’s Discussion: release cycle, here’s my view from Release: 2.0.4.23 (beta) 2019-07-14:

Those are already broken in the current beta release. If some things are not fixed, why hold up a beta release when other items are ready to go? Not everything needs to be fixed to deliver a new release.

I’m not sure I understand the requirements that need to be met for a new release. If there are incremental fixes, get them out. For issues still not fixed, then, like other projects do, put them in the release notes as “Known Issues”. We certainly would not want to knowingly push out a release that is worse.

I must be missing something. Thanks for your patience.

I likely confused things by referencing an issues post, then asking for discussion on two others. Do you mean the Issue1400-and-friends one, which is an old bug but very impactful, as seen in the impact analysis of the issue?
That had so much advance work, including analysis and a proof-of-concept fix, that it seemed worth fixing.

The FTP (Alternative) bug does not exist in the current beta. If you are referring to both stop issues, they do exist; however, I wanted to see if anybody would speak to them as worthy of squeezing in, given, as you say:

This is exactly why I’m proposing we weigh the fixes against the not-yet-fixed. Some fixes just have to wait. Which of the issues (if any) do you think should get fixed before we PM kenkendk to release a canary?

This is exactly why I was lobbying for just an incremental fix for aftp instead of the TargetFramework move; however, the consensus of those with opinions (including on the current thread) seemed to favor the bigger step.

What I’d really like would be to fix the beta blockers (including the aftp backend fix I keep mentioning

A reasonable idea, if we get better at release notes (volunteers?) and can filter 800+ issues down to a limited list, possibly with an “Errata” section for known workarounds, with other issues at least having to be describable… Describable issues might also be candidates for a milestone. Random breakages are hard to deal with.

Anyone else have input? Do you want to PM for a canary right now, or if not, then what should we wait for?