The pull request from @BlueBlock fixing the very painful Issue1400 went into master an hour ago via @warwickmm. I’m not going to push hard for a standalone FluentFTP update, but it’d still be nice if it can squeak in, if anyone is willing to do it. FTP (Alternative) users can (to some extent) move to regular FTP instead, and I’m not certain what fraction are affected anyway, extrapolating from a single canary report.
The most common client OS is Windows, so the main server mismatch may arise when a Linux FTP server (e.g. a NAS) is used.
@warwickmm would you like to ask kenkendk for a canary? I’m not sure how fast he can do that anyway.
The 3866 PR does not look like it does anything to fix the mistaken FTP conversion between OS line endings. That is entirely as expected, because this was a bug in the .dll that was exposed by Duplicati canary code.
PR Update framework to 462 #3844 updates FluentFTP from 21.0.0 to 27.0.1, but is being proposed to not hit canary. The latest nuget.org FluentFTP version is 27.0.3, and I haven’t looked to see what has changed since 26.0.0.
Yes, although the impact is unknown. Avoiding breaking such users is one reason why 188.8.131.52 was just 184.108.40.206 plus a warning. Of course, the other big reason for not going Beta was that Experimental and Canary had only had about two weeks of testing, whereas now they’ve had about two months and seem to work OK.
For past FluentFTP background, please see current post here or the framework update discussion here.
To add to the request… it sounds like there was an attempt at an automated release process… if it’s not used for canary, it would be great to have it for nightlies. It’s important to get fixes out to some testers as soon as possible, since not all testers want to perform builds to get the latest for testing. It reduces friction for the testers.
Agreed. I’m still not set up for builds, but there are times when I’d like a nightly, e.g. if I can talk someone into a FluentFTP update (to see if it can get there before the canary build does…). It’d also be great to have more testers at every release level, preferably people with some expertise and an ability to file good issues.
I’m not certain unit tests are the mechanism, but I don’t know where Duplicati’s focus should be. Sometimes a “stress” test (thinking of “pounded”) is a different thing, and may require more equipment than a typical person has. There is occasionally talk of shared resources for the team, but who’s going to manage the infrastructure?
By “pounded” I’m thinking of putting Duplicati in less common but still very important scenarios, such as killing the process during backups, to try to find any weaknesses in the file and database handling.
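The kill-during-backup idea could be scripted as a crude crash-injection loop. A minimal sketch, with the real Duplicati CLI command replaced by a placeholder, and using sqlite’s own `integrity_check` as a stand-in for Duplicati’s deeper verification:

```python
import sqlite3
import subprocess
import time

def kill_during_run(cmd, kill_after_seconds):
    """Start a process and kill it mid-run, simulating an abrupt crash.

    In a real test, cmd would be the Duplicati CLI backup invocation
    (placeholder here), and kill_after_seconds would be varied randomly.
    """
    proc = subprocess.Popen(cmd)
    time.sleep(kill_after_seconds)
    proc.kill()   # abrupt termination, no chance to clean up
    proc.wait()
    return proc.returncode

def database_is_consistent(db_path):
    """After the kill, run sqlite's integrity check on the local database.

    This only catches sqlite-level corruption; Duplicati's own verification
    would be needed to catch logical inconsistencies like orphaned blocksets.
    """
    with sqlite3.connect(db_path) as conn:
        row = conn.execute("PRAGMA integrity_check").fetchone()
    return row[0] == "ok"
```

A real harness would loop: run a backup, kill at a random offset, verify, and log any run where verification fails, since that is exactly the reproducible case being hunted.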
More of an immediate hit list. I think we’ll try using the GitHub features to track this, but it needs to be populated with issues first.
Perhaps most pressing will be to handle the change needed for Google Drive.
A bunch of questions…
Is a notification method needed, rather than pushing out a release, to get a notification out? There could be a GitHub file for a canary-notice and a release-notice that the web client could periodically check.
Is staying with Angular 1 fine, or are there benefits to moving to a newer version?
For the usage-reporter, I have it deployed to my own GAE, but I might be missing some steps like the db storage. Might there be instructions for setting up the usage-reporter server pieces for development? Or any tips on getting it set up?
For me, the top issues are the ones involving database corruption. I think we’ve made some good progress on this (thanks @ts678 and @BlueBlock), but I believe there are still some issues to take care of. The difficult part is coming up with reproducible examples. @ts678 has done a great job of tracking some of these down.
I took a stab at looking through the titles of 800+ issues last night. The main one that’s clearly going to bite hard is Google Drive, if they actually go through with it. They’re being surprisingly quiet for a looming hard cutoff.
Beyond that, I’m wondering if GitHub might need to be populated (for milestone purposes) with specially created meta-issues representing particular failure modes, trying to find some pattern across all cases.
I spend more time in the forum (partly because there’s a lot of activity and relatively few people covering it), so I might go about this by looking for patterns there, discounting what we may have just fixed, and determining whether the main issues have any GitHub issues behind them. Often forum users won’t file issues.
But it’s a great idea to see what’s painful, and I’m glad we took a whack at the recent Issue1400 reappearance, along with at least one route to “Detected non-empty blocksets with no associated blocks!”. “Stop now” or restarting the host is said to be another path, so @BlueBlock’s idea of continuing work on the “stop” code fixes, or deliberately pounding Duplicati a bit with kills or shutdowns, might find a reproducible case.
We could do that. Maybe bundle it with the update checking to avoid two server calls. OTOH, if the message is checked more frequently, that is fine too.
I am not sure if new projects are started with AngularJS, but it seems that Google is still maintaining it.
If we move to something like Vue, the approach is similar. Angular 2 has enough changes that it would require more rewriting.
Anyway, if someone wants to work on that, we can easily have multiple frontends running simultaneously (we had that back when it was all based on jQuery).
The usage-reporter currently just collects the stats in a database. I don’t recall if there is a way to redirect the URL, but we can add a check for an environment variable.
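That environment-variable check could be as small as the following sketch. The variable name `USAGE_REPORTER_URL` and the default URL are hypothetical, chosen only to illustrate the idea of redirecting reports to a local dev server:

```python
import os

# Illustrative production endpoint; not the actual Duplicati reporter URL.
DEFAULT_REPORTER_URL = "https://usage-reporter.example.com/api/v1/report"

def reporter_url():
    """Return the usage-reporter endpoint, allowing an env-var override.

    Setting USAGE_REPORTER_URL (name is an assumption) lets a developer
    point their client at a locally running reporter instance on GAE
    without a code change.
    """
    return os.environ.get("USAGE_REPORTER_URL", DEFAULT_REPORTER_URL)
```

For local development one would then run the client with, e.g., `USAGE_REPORTER_URL=http://localhost:8080/report`, and the deployed default applies everywhere else.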