Jottacloud Error 401 (Unauthorized)

There is a memcache, yes. I think for v2 authids it only caches the access token, so that when Duplicati supplies its refresh token (aka v2 authid), the oauth service can send a cached access token back if it is still valid, instead of having to send a token refresh request to Jottacloud every time.
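For illustration, a minimal sketch of how caching an access token keyed by the refresh token could work. This is my own mock-up, not the oauth-handler's actual code; the names (`TokenCache`, the 300-second TTL, `refresh_from_jottacloud`) are all illustrative assumptions.

```python
import time

class TokenCache:
    """Hypothetical in-memory cache of access tokens keyed by refresh token."""

    def __init__(self, ttl_seconds=300):  # TTL value is an assumption
        self.ttl = ttl_seconds
        self._store = {}  # refresh_token -> (access_token, expiry timestamp)

    def get(self, refresh_token, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(refresh_token)
        if entry and entry[1] > now:
            return entry[0]  # still valid: no upstream refresh needed
        return None

    def put(self, refresh_token, access_token, now=None):
        now = time.time() if now is None else now
        self._store[refresh_token] = (access_token, now + self.ttl)

def get_access_token(cache, refresh_token, refresh_from_jottacloud):
    """Return a cached access token, or refresh upstream on a cache miss."""
    token = cache.get(refresh_token)
    if token is None:
        token = refresh_from_jottacloud(refresh_token)
        cache.put(refresh_token, token)
    return token
```

The point is that repeated requests with the same v2 authid within the TTL never hit Jottacloud's token endpoint.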

I don’t think this is true. The “invalidated” refresh token can still be used, until the 12/24-hour stale-token event.

Correct. I consider the OAuth handler change the “correct” fix. The Duplicati fix is to some degree more of a workaround, partly because I thought it would get released much quicker (but it seems not…). They both do the same thing with regard to this issue: prevent the authid auto-upgrade from v1 to v2. I still think it is best to include both, for completeness, robustness, etc. But for testing, you only need one: either run Duplicati from a master build (or use the dll patch), or run the release version of Duplicati against the beta OAuth service.
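As a sketch of what both fixes amount to, as I understand them: when the token response offers a v2 authid, the client simply keeps its stored v1 authid instead of upgrading. The function and field names below (`maybe_upgrade_authid`, `v2_authid`) are illustrative, not the actual Duplicati or oauth-handler code.

```python
def maybe_upgrade_authid(stored_authid, response, allow_upgrade=False):
    """Decide which authid to keep after receiving a token response.

    With the fix, allow_upgrade stays False: the stored v1 authid is kept
    even when the response also offers a v2 authid, so the auto-upgrade
    that led to invalidated tokens never happens.
    """
    offered_v2 = response.get("v2_authid")  # field name is an assumption
    if allow_upgrade and offered_v2:
        return offered_v2
    return stored_authid
```

Whether the guard lives in Duplicati (ignore the offered v2 authid) or in the OAuth handler (stop offering it), the effect on this issue is the same.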

The v2 authid has been around in the oauth service since 2016, and from the start it sent back the v2 authid together with a v1 response: Support for v2 tokens that are just the refresh tokens · duplicati/oauth-handler@07c3061 · GitHub
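To make the "just the refresh token" point concrete, here is a hypothetical helper distinguishing the two authid kinds. The "v2:" prefix convention is an assumption for illustration, not a confirmed detail of the oauth-handler's format.

```python
V2_PREFIX = "v2:"  # assumed marker; v1 authids are opaque server-side keys

def is_v2_authid(authid):
    """True if this authid is the self-contained v2 kind (assumed prefix)."""
    return authid.startswith(V2_PREFIX)

def refresh_token_from_authid(authid):
    """For a v2 authid, the payload after the prefix is the refresh token
    itself; a v1 authid carries no refresh token, so return None."""
    if is_v2_authid(authid):
        return authid[len(V2_PREFIX):]
    return None
```
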

That is a good question!

I’ve been using rclone while waiting for the fix to make it to release, and in the last few days it has stopped working.

It gets stuck deleting unwanted files on my daily backups, and when I try to do a full verify with download of all remote files, that hangs on the first or second file over a few megabytes in size.

I think I will try regenerating the token next, but it appears that something has changed and rclone no longer works. I’ll do some research to see if any other rclone users are having this issue.

I’m using rclone with Jottacloud myself, and have not seen any problems with it. But I’ve not used rclone as a backend for Duplicati.

Sorry for the radio silence.
It is now working for me:
I am running FreeBSD (as I was before), but I did not realise I had to use the unofficial DLLs as well.
Since I did, I have not had a single issue - even with a backup running for days with millions of files (before using the DLLs, it would invalidate the token with every backup that ran longer than a couple of hours).

Thanks a lot for the support & patience!


The solution is still working for me here, but I have noticed that Duplicati uses much less bandwidth now: only about 10-20 Mbps, while I have about twice as much available. Has anyone else seen similar behaviour?

  • I have well under 5TB uploaded to JC
  • jotta-cli uploads as fast as expected on the same machine
  • Throttling is turned off.

Don’t know the reason for that, but just mentioning that I have a PR to change the Jottacloud backend to use a newer upload API, the same as currently used by rclone and probably jotta-cli: Jottacloud upload implementation changed to use new api by albertony · Pull Request #4715 · duplicati/duplicati · GitHub. I’ve left it as a draft for quite some time, because I don’t want to complete it until the authentication issues are out of the way (and also until I see that it is worth investing more of my time in it, i.e. other completed PRs actually get merged and released).

Hi! After a semi-thorough market inventory I just settled on Jottacloud + Duplicati to replace Crashplan. Running into this immediately was a bit of a cold shower. :joy: I’ve read through this whole thread, switched to the Canary build and patched with the three DLLs linked from here (which I’m really not very comfortable with, but I’m a bit desperate to get it working). I still get error 500 after a couple of hours and have to generate a new CLI key. Did I miss some step?

Since I like JottaCloud photo sync better than Qfile sync, I want to sync the other way, from JottaCloud to my Qnap NAS. Does anyone know of a way to do this?

  • Windows MSI fresh install
  • 3 patched DLLs from albertony/duplicati at jottacloud-disable-v2-authid (now merged into duplicati/duplicati)
  • –asynchronous-concurrent-upload-limit unset
  • beta OAuth handler (duplicati-oauth-handler PR/10 code)
The beta handler helped; my backups are usually fine now. But very rarely I still get tokens invalidated during deletion of large file sets - it could be unrelated, since it happens so seldom.

My backups often fail during the deletion of large file sets if there is anything significant (>2 GB) being uploaded.

And now it’s working fine. The last 1.5 TB uploaded without issues. Don’t know what the problem was. I initially missed that I had to install the complete 2022-06-15 build and not just autoupdate to it, but even after remedying that I still had the OAuth problems. I also couldn’t install the canary .deb file - I had to download the zip archive and extract it on top of the installation. Plus the patched files, so it’s a bastard of an install. Hope we can get this mainlined at some point.

For what it’s worth, rclone still works very well. Not ideal, but it does mean you can use the normal stable version.

Any chance for those changes to make it into the released canary?

Probably slightly above zero, but the release manager had no time last month, and there is no forecast on when time will be available.
You can watch the Releases category, where questions of this sort come up. The last reply was yesterday.

Thanks to the volunteers (including on this topic) who keep Duplicati going, but more are needed…

Developers at all levels and areas are encouraged, but release management is kind of a large role, depending on how it’s defined. Releasing what’s already committed (all 4 changes) might be easier compared to ongoing pull request work. Maybe I’ll ask about time again later if no release emerges.


Still not working? :confused::thinking:

Please try the latest canary… :crossed_fingers:

Please test today’s Canary. It replaces the 3-DLL workaround posted above that some were using.
There are later potential Jottacloud changes, but this is the change that has been committed but unreleased since Aug 7.

Once this particular bug is known to be rooted out, could you consider changing the forum configuration to automatically close threads with no activity for one month? These kinds of eternal threads that have no relation to the original first post are tiresome - even impossible - to read, and quite often can be used by spammers. It may be that you don’t have the necessary rights, of course.

Even though this topic is an example of an ultra-long one, I’m not sure it’s the place to discuss policy.
Got a sample site that does this? Closing topics could lead to topic proliferation which may be worse.
Information gets spread across lots of topics (making a mess), instead of down one (making a mess).

I think the Mar 2022 return of activity here actually fits this original post and topic title well, doesn’t it?
Possibly it could have been forked then, but you’d still have May 2022 to April 2023 on the same bug.

If we can get to faster fixes, that would help shorten long topics, but forum is still a poor issue tracker.
What to do about antique GitHub Issues is another question, but also doesn’t belong under this topic.

Let’s Encrypt forum

This kind of feature is best used on forums with a high level of complaints about repetitive subjects, such as ‘can’t renew certificate’, that can potentially lead to hundreds of ‘me too’ follow-ups, even though a generic problem can happen for many different reasons.
For Duplicati, it could be ‘cannot backup’. It’s tidier when new threads are created, because otherwise opportunistic posters feel free to not post any details (‘I have the same problem!’ is the whole of the problem report).

If the thread is continuously kept alive, it’s usually the same problem. It’s always possible to bump a post by editing it if one still cares about it.

And yes I know that’s off topic but it’s the whole point that long threads always get off topic :slight_smile: