Issues to address to get out of beta

I don’t know of any issues currently that should block a canary release. I think we should get a canary out soon.

@BlueBlock's pull request fixing the very painful Issue1400 went into master an hour ago via @warwickmm. I'm not going to push hard for a standalone FluentFTP update, but it would still be nice if it could squeak in if anyone is willing to do it. FTP (Alternative) users can (to some extent) move to regular FTP instead, and I'm not certain what fraction are affected anyway, extrapolating from a single canary report.

The most common client OS is Windows, so the main server mismatch is likely when a Linux FTP server (e.g. a NAS) is used.

@warwickmm, would you like to ask kenkendk for a canary? I'm not sure how quickly he can do that anyway.

Doesn’t this PR fix it or is there another issue? #3866

AFAIK it requires FluentFTP 26.0.0 to pick up this fix, which isn’t exactly what I described, but might work:

OpenRead with EnableThreadSafeDataConnections = true does ASCII when it shouldn’t #428

The #3866 PR does not look like it does anything to fix FTP's mistaken conversion between OS line endings. That is entirely as expected, because this was a bug in the .dll that was merely exposed by Duplicati canary code.
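For context on why an ASCII-mode transfer is so damaging here, this small sketch (plain Python for illustration, not Duplicati or FluentFTP code; the payload is made up) shows what line-ending conversion does to binary volume data:

```python
# Minimal, self-contained illustration of why an ASCII-mode FTP transfer
# corrupts a binary backup volume: ASCII mode rewrites line-ending bytes,
# and 0x0A / 0x0D occur freely inside compressed or encrypted archives.
import hashlib

# Stand-in for a backup volume: arbitrary binary content containing LF bytes.
volume = bytes(range(256)) * 16

# Roughly what an ASCII ("TYPE A") transfer does: bare LF becomes CRLF on the
# wire, and the receiving side may rewrite line endings again for its own OS.
as_transferred = volume.replace(b"\n", b"\r\n")

print("bytes before/after:", len(volume), len(as_transferred))
print("hash matches after transfer:",
      hashlib.sha256(volume).digest() == hashlib.sha256(as_transferred).digest())
```

Any such rewrite means the uploaded file no longer matches the hash that was recorded for it, so verification fails even though the transfer "succeeded".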

PR Update framework to 462 #3844 updates FluentFTP from 21.0.0 to 27.0.1, but it is being proposed not to land in a canary. The latest FluentFTP version on nuget.org is 27.0.3, and I haven't looked at what has changed since 26.0.0.

I see… thanks. That would seem to be an important fix to get out for those impacted.

Yes, although the impact is unknown. Avoiding breaking such users is one reason why 2.0.4.23 was just 2.0.4.5 plus a warning. Of course, the other big reason for not going Beta was that Experimental and Canary had only had about two weeks of testing, whereas now they've had about two months and seem to work OK.

For past FluentFTP background, please see the current post here or the framework update discussion here.

I’ve asked ken about creating a canary release. Hopefully he will be able to find some time in his busy schedule.

To add to the request… it sounds like there was an attempt at an automated release process. Even if it's not used for canary, it would be great to have it for nightlies. It's important to get fixes out to testers as soon as possible, since not all testers want to perform builds to get the latest code; automated builds reduce friction for them.

Agree. I’m still not set up for builds, but there are times when I’d like a nightly, e.g. if I can talk someone into a FluentFTP update (to see if that can get there before the canary build does…). It’d also be great to have more testers at any release level – preferably people with some expertise and an ability to file good issues.

I'm not certain unit tests are the mechanism, but I don't know what Duplicati's focus should be. Sometimes a "stress" test (thinking of "pounded") is a different thing, and it may require more equipment than a typical person has. There is occasionally talk of shared resources for the team, but who's going to manage the infrastructure?

By "pounded" I mean putting Duplicati into less common but still very important scenarios, such as killing the process during backups, to try to find any weaknesses in the file and database handling.
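A rough sketch of that kind of pounding, under the assumption of a local test setup (the executable path, storage URL, and options below are placeholders, not an endorsed harness):

```python
# Start a backup, kill it partway through, then ask Duplicati to verify the
# destination and see what breaks. Repeat until something reproducible shows up.
import random
import subprocess
import time

DUPLICATI = "Duplicati.CommandLine.exe"          # adjust for your install
TARGET = "file://C:/temp/stress-target"          # placeholder destination
SOURCE = "C:/temp/stress-source"                 # placeholder source data
OPTS = ["--dbpath=C:/temp/stress.sqlite", "--passphrase=test"]

for run in range(20):
    proc = subprocess.Popen([DUPLICATI, "backup", TARGET, SOURCE, *OPTS])
    time.sleep(random.uniform(1, 30))            # let it get partway through
    proc.kill()                                  # simulate a crash or power loss
    proc.wait()

    # A non-zero exit here (or errors in the output) is a reproducible lead.
    result = subprocess.run([DUPLICATI, "test", TARGET, "all", *OPTS])
    print(f"run {run}: test exit code {result.returncode}")
```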


@ts678 and anyone else, what do you see as the top 3-5 issues listed on GitHub that need to be fixed?

I have the (personal) signing key, so I need to do the releases. I have set it up so it is just a matter of running the build script (takes ~15min) and then a new build is uploaded.

But sometimes things fail or have changed, so I need to track that down. I have put out a new release based on @warwickmm's request, but the update to .NET 4.6.2 broke the Windows build server.

I try to get the canaries out regularly, but if I am behind, please send me a PM and I will do it asap.

As I mentioned elsewhere, @verhoek has set up a system for fully automatic nightly builds. It should be a matter of a few hours of work to get it running.

There is a fix/workaround for AFTP in the new canary. Is this fix sufficient to go with a new beta? And if so, should we base it on the newest experimental or go "canary → experimental → beta"?

This “just one more thing” is usually what prevents getting a beta out, but maybe someone (i.e. not me) can make a good argument for what goes into the beta.

You mean for a stable release I assume?

I only have the “repair speed and stability” issue. But it sounds like some of it is fixed already.

Is this something you'll need to do, or can others assist? I suppose if it's the build machine, then it's just you.

More of an immediate hit list. I think we'll try using the GitHub features to track this, but it needs to be populated with issues first.

Perhaps most pressing will be to handle the change needed for Google Drive.

A bunch of questions…

Is a notification method needed, rather than pushing out a release just to get a notification out? There could be a GitHub file for a canary notice and a release notice that the web client periodically checks.
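A minimal sketch of what that check could look like, assuming a hypothetical canary-notice.json in the repository (the file name, URL, and JSON fields are invented for illustration; nothing like this exists today):

```python
# Periodically fetch a small notice file from GitHub and compare it to the
# running version; the web client could do the same from JavaScript.
import json
import time
import urllib.request

NOTICE_URL = ("https://raw.githubusercontent.com/duplicati/duplicati/"
              "master/canary-notice.json")       # hypothetical file
RUNNING_VERSION = "2.0.4.23"                     # example value
CHECK_INTERVAL = 6 * 60 * 60                     # every six hours

def check_for_notice():
    with urllib.request.urlopen(NOTICE_URL, timeout=10) as resp:
        notice = json.load(resp)                 # e.g. {"canary": "...", "message": "..."}
    if notice.get("canary") and notice["canary"] != RUNNING_VERSION:
        print("New canary available:", notice["canary"], "-", notice.get("message", ""))

while True:
    try:
        check_for_notice()
    except Exception as exc:                     # network errors shouldn't kill the client
        print("notice check failed:", exc)
    time.sleep(CHECK_INTERVAL)
```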

Is staying with angular1 fine or are there benefits to moving to a newer version?

For the usage-reporter, I have it deployed to my own GAE instance, but I might be missing some steps, like the DB storage. Might there be instructions for setting up the usage-reporter server pieces for development? Or any tips on getting it set up?

For me, the top issues are the ones involving database corruption. I think we’ve made some good progress on this (thanks @ts678 and @BlueBlock), but I believe there are still some issues to take care of. The difficult part is coming up with reproducible examples. @ts678 has done a great job of tracking some of these down.

If we can get steps to reproduce those issues, that is a big step toward getting them fixed. Would a "reproducible" label help identify the issues that include instructions?

It looks like just one issue is tagged as “fixed”.

Yes, I think judicious use of labels would be helpful.

Ok. Let me know if I’m able to help with it.

I took a stab at looking at the titles of 800+ issues last night. Main one that’s clearly going to bite hard is Google Drive if they actually go through with it. They’re being surprisingly quiet for a looming hard cutoff.

Beyond that, I’m wondering if GitHub might need to be populated (for milestone purposes) with specially created meta-issues representing particular failure modes, trying to find some pattern across all cases.

I spend more time in the forum (partly because there's a lot of activity and relatively few people covering it), so I might go about this by looking for patterns there, discounting what we've maybe just fixed, and determining whether the main issues have any GitHub issues behind them. Often forum users won't file issues.

But it's a great idea to see what's painful, and I'm glad we took a whack at the recent Issue1400 reappearance along with at least one route to "Detected non-empty blocksets with no associated blocks!". "Stop now" or restarting the host is said to be another path, so @BlueBlock's idea of continuing work on the "stop" code fixes, or deliberately pounding Duplicati a bit with kills or shutdowns, might find a reproducible case.