There is a memcache, yes. I think for v2 authid it only caches the access token, so that when Duplicati supplies its refresh token (aka v2 authid), the OAuth service can send a cached access token back if it is still valid, instead of having to send a token refresh request to Jottacloud every time.
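To illustrate the idea (this is a hypothetical sketch, not the actual OAuth service code), the caching described above amounts to keying cached access tokens by the refresh token and only contacting the provider when the cached one has expired:

```python
import time


class AccessTokenCache:
    """Hypothetical sketch of the caching described above: cached access
    tokens are keyed by the refresh token (v2 authid), and the provider
    (here, Jottacloud) is only contacted when no valid cached token exists.
    All names here are illustrative, not the service's real API."""

    def __init__(self):
        # refresh_token -> (access_token, expiry timestamp)
        self._cache = {}

    def get(self, refresh_token, refresh_from_provider):
        entry = self._cache.get(refresh_token)
        if entry is not None and entry[1] > time.time():
            # Cached token still valid: no request to the provider.
            return entry[0]
        # Expired or missing: perform a real token-refresh request.
        access_token, expires_in = refresh_from_provider(refresh_token)
        self._cache[refresh_token] = (access_token, time.time() + expires_in)
        return access_token
```

Under this model, repeated calls with the same refresh token within the token's lifetime hit the cache, which is why the service does not need to bother Jottacloud on every backend operation.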
I don’t think this is true. The “invalidated” refresh token can still be used, until the 12/24-hour stale-token event.
Correct. I consider the OAuth handler change the “correct” fix. The Duplicati fix is to some degree more of a workaround, partly because I thought it would get released much quicker (but it seems not…). They both do the same thing with regard to this issue: prevent authid auto-upgrade from v1 to v2. I still think it is best to include both, for completeness, robustness, etc. But for testing, you only need one: either run Duplicati from a master build (or use the dll patch), or run a release version of Duplicati against the beta OAuth service.
Sorry for the radio silence.
It is now working for me:
I am running the canary_2022-06-15 build on FreeBSD (as I was before), but I did not realise I had to use the unofficial DLLs as well.
Since I did, I have not had a single issue - even with a backup running for days with millions of files (before using the DLLs, it would invalidate the token on every backup that ran longer than a couple of hours).
The solution is still working for me here, but I have noticed that Duplicati uses much less bandwidth now: only about 10-20 Mbps, while I have about twice as much available. Has anyone else seen similar behaviour?
I have well under 5TB uploaded to JC
jotta-cli uploads as fast as expected on the same machine
Hi! After a semi-thorough market inventory I just settled on Jottacloud + Duplicati to replace Crashplan. Running into this immediately was a bit of a cold shower. I’ve read through this whole thread, switched to the canary_2022-06-15 build and patched with the three DLLs linked from here (which I’m really not very comfortable with, but a bit desperate to get it working). I still get error 500 after a couple of hours and have to generate a new CLI key. Did I miss some step?
And now it’s working fine. The last 1.5 TB uploaded without issues. Don’t know what the problem was. I initially missed that I had to install the complete 2022-06-15 build and not just use autoupdate to it, but even after remedying that I still had the OAuth problems. Also couldn’t install the canary .deb file - had to download the zip archive and extract on top of the installation. Plus patched files, so it’s a bastard of an install. Hope we can get this mainlined at some point.
Probably slightly above zero, but the release manager had no time last month and gave no forecast on timing.
You can watch Releases category, where questions of this sort come up. Last reply was yesterday.
Thanks to the volunteers (including on this topic) who keep Duplicati going, but more are needed…
Developers at all levels and in all areas are encouraged, but release management is kind of a large role, depending on how it’s defined. Releasing what’s already committed (all 4 changes) might be easier than ongoing pull request work. Maybe I’ll ask about time again later if no release emerges.
Once this particular bug is known to be rooted out, could you consider changing the forum configuration to automatically close threads that have had no activity for a month? These eternal threads that bear no relation to the original first post are tiresome - even impossible - to read, and quite often attract spammers. It may be that you don’t have the necessary rights, of course.
Even though this topic is an example of an ultra-long one, I’m not sure it’s the place to discuss policy.
Got a sample site that does this? Closing topics could lead to topic proliferation which may be worse.
Information gets spread across lots of topics (making a mess), instead of down one (making a mess).
I think the Mar 2022 return of activity here actually fits this original post and topic title well, doesn’t it?
Possibly it could have been forked then, but you’d still have May 2022 to April 2023 on the same bug.
If we can get to faster fixes, that would help shorten long topics, but forum is still a poor issue tracker.
What to do about antique GitHub Issues is another question, but also doesn’t belong under this topic.
This kind of feature is best suited to forums with a high volume of complaints on repetitive subjects, such as ‘can’t renew certificate’, which can lead to hundreds of ‘me too’ follow-ups even though a generic symptom can have many different causes.
For Duplicati, it could be ‘cannot backup’. It’s tidier when new threads are created, because otherwise opportunistic posters feel free to not post any details (‘I have the same problem!’ being the whole of the problem report).
If a thread is continuously kept alive, it’s usually the same problem. It’s always possible to bump a post by editing it if one still cares about it.
And yes, I know that’s off topic, but that’s the whole point: long threads always drift off topic.