Extremely slow web - fast commandline execution

Today my Duplicati web frontend became insanely slow. At first I thought it had completely lost the connection to the backend, but when editing a backup set, the settings do appear after 2.5-4 minutes. The info under ‘About’ is also slow to appear. I’m running the linuxserver Docker image.

I tried restoring a backup of ‘Duplicati-server.sqlite’ from last night, but that didn’t help. Recreating the Docker container sometimes fixes it, but only for a very short period. I have backups of my jobs created with Duplicati Client, and I figured out how to back up my global settings (‘edit text’), so I could probably start over fairly easily by deleting ‘Duplicati-server.sqlite’.

I first saw the problem after I ran verify in the GUI (and also some compacting, but those operations worked).

Any thoughts? Is this a known problem? Should I supply some settings for debugging, or should I just try to recreate the db and see if that resolves the issue?

I’m running:

Duplicati -

Are you limiting the resources (either RAM or CPU) your docker container can use?

I don’t have experience with the linuxserver image, but I use the official Duplicati docker image on my Synology NAS. I have not experienced a problem like you describe.

No, I don’t have any limits on CPU or RAM, and it has been running fine for half a year. I did recently add a 7TB backup of around maybe 700.000 files, but I doubt that is related, as the problem starts even before it loads the databases (starts the backup operations).

The title says “fast commandline execution”, but that’s not explained that I noticed. Can you explain?

Connections can be slow but eventually finish due to retries. What is the network path into Duplicati?
If you have docker then perhaps the host is Linux or something that can show attempts to send data.

$ netstat -an | grep 8200
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0*               LISTEN
tcp        0      0          ESTABLISHED
tcp        0      0         ESTABLISHED
tcp        0      0          ESTABLISHED
tcp        0      0         ESTABLISHED
tcp        0      0         ESTABLISHED
tcp        0      0         ESTABLISHED
tcp        0      0          ESTABLISHED
tcp        0      0          ESTABLISHED
tcp        0      0          ESTABLISHED
tcp        0      0         ESTABLISHED
tcp        0      0          ESTABLISHED
tcp        0      0         ESTABLISHED

8200 is perhaps the Duplicati default port you use. If you see Send-Q rising, it may be data trying to get out.
For any performance issue, checking system performance tools (CPU, disk, memory) can also help.
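A rough sketch of pulling out just the Send-Q column, so a once-per-second loop can show whether it is rising (8200 is assumed as the port, as above; the sample line and its addresses are made up for illustration):

```shell
# Sketch: extract the Send-Q column (field 3 of `netstat -an` output)
# for lines mentioning a given port. In practice you would run it as:
#   while true; do netstat -an | send_q_for 8200; sleep 1; done
# and watch whether the numbers grow over time.
send_q_for() {
  grep ":$1" | awk '{ print $3 }'
}

# Demonstration against a captured line rather than a live system:
printf 'tcp 0 512 10.0.0.5:8200 10.0.0.9:51000 ESTABLISHED\n' | send_q_for 8200
# prints: 512
```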

Yes, about the command line: it’s normal and fast. No many-minutes delay in getting output or getting a backup running. By command line I mean something like this (with options removed):

$ docker exec duplicati mono /app/duplicati/Duplicati.CommandLine.exe backup "ssh:// /mnt/user/ --log-file-log-level=Verbose ...

I actually don’t think the backup jobs that have been executed automatically by Duplicati have been slow; according to my logs, they seem to run at normal speed. I’m logging at Information level and can only see Information-Duplicati.Library.Main.Controller-StartingOperation, not a duration at completion, but the speed seems to have been normal. So I think it’s just the web frontend that’s messed up.

The netstat gives just this:

$ netstat -an | grep 8200
tcp        0      0  *               LISTEN

While the web browser is connected and struggling (or not) to interact with the server?
There should be some connection showing as ESTABLISHED to whatever port you use.

If Docker does something odd about ports, then I hope some expert can come help here.

Yeah, I ran it in a loop with sleep 1 on the host, and it never showed me anything else. I couldn’t run it inside the container, in case that matters, as netstat wasn’t installed in the Docker image.

It did have stuff like this, maybe that is Docker magic (but no match for 8200 besides the LISTEN entry):

tcp        0      0 Tower:59786             Tower:40379             ESTABLISHED
tcp       25      0 Tower.lan:46540         ec2-3-248-111-47.:https CLOSE_WAIT
tcp        0      0 Tower:40379             Tower:59786             ESTABLISHED

Here’s the port-setup on the docker.

$ docker ps | grep duplicati
6d9cdef81f52   linuxserver/duplicati              "/init"                  19 minutes ago   Up 19 minutes   >8200/tcp                                                                           duplicati
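Since netstat wasn’t in the image, one workaround is reading /proc/net/tcp directly, which is present in essentially every Linux container. A sketch (assuming the container is named duplicati and listens on 8200, as above); the catch is that /proc/net/tcp lists addresses and ports in hex:

```shell
# /proc/net/tcp shows ports in hex, so convert the port number first.
PORT_HEX=$(printf '%04X' 8200)
echo "grep for hex port :$PORT_HEX"
# prints: grep for hex port :2008

# Then, from the host (st column: 0A = LISTEN, 01 = ESTABLISHED):
#   docker exec duplicati sh -c 'grep -i ":2008 " /proc/net/tcp'
```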

I tried removing Duplicati-server.sqlite and restarting Docker, and it was still slow, as in I had trouble saving global settings; it just hung. Note that here I didn’t recreate the Docker container, which I have sometimes found works for a short period.

Then I moved to a new empty config folder and recreated the Docker container, and since then it has been snappy. I’ve now recreated all my backups via duplicati-client, but it has switched to new database names. After restoring the original database name (and copying the file back into the new folder) I still had to choose repair (it said 178 files not in local storage, which was the number of files on the remote). Do you know if that’s normal? Shouldn’t it just start with everything back in sync if I recreate backups from exports (via duplicati-client) and then update the database names of each?

The good thing is the web server has been snappy so far, but I suspect the issue will return, since it was weirdly still there after removing Duplicati-server.sqlite and restarting Docker. I’ll play around some more tomorrow.

Oh, and thank you for the help, it’s much appreciated.

Perhaps precisely “Found 178 remote files that are not recorded in local storage, please run repair”?

Depends on how update is done, and what’s there. If you actually got the old database content there,
it should have had its old information on remote files, and not been surprised to see the remote files.

I’m not sure what’s where (and I don’t use Docker), but recreating the image by itself shouldn’t need
config folder fiddling because the config folder is on the host. Maybe duplicati-client work needed it…


Yeah, when I don’t delete Duplicati-server.sqlite, I can recreate the Docker container without losing anything because, as you say, my databases are on the host. When I deleted Duplicati-server.sqlite, I had to restore the backup jobs and manually restore each job database. But manually restoring the database didn’t help Duplicati avoid a repair, which surprised me.

Surprises me too. Check full database paths carefully, maybe take a copy of the believed-former database, then have the UI Recreate. See if it deletes the ought-to-be-fine one and replaces it with its own rebuilt one.

Other ways of testing what you got are to look at logs (any there?), or see if Restore screen versions exist.

For heavy inspection, your distro might have an sqlitebrowser package to look inside various databases.
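The sqlite3 command-line shell works for this too, if a GUI isn’t handy. A self-contained sketch using a throwaway scratch database (the table name Demo is made up; in practice you would point the same commands at a copy of Duplicati-server.sqlite or a job database):

```shell
# Create a scratch database, then list its tables with the .tables
# dot-command; no knowledge of the schema is needed.
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE Demo (Id INTEGER);"
sqlite3 "$db" ".tables"
# prints: Demo
rm -f "$db"
```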

I found the issue. Apparently Duplicati can only handle being open in 4 tabs (tested in Brave and Chrome). When opening the 5th, it’s extremely slow to get data from the backend, for example when showing the live log, About, or edit (it gets the static data, but not the dynamically fetched data from the backend).

Once during testing, when I opened many tabs too quickly, it gave me ‘serviceUnavailable’, and another time an empty HTML page with ‘Request Queue is full’; those two cases didn’t count towards the maximum of 4 open tabs.

Should we turn this into a bug report? It would be nice to at least get a better service message, like an error saying ‘slow down mate, close some tabs already…!’ (or something more descriptive :slight_smile: )

I also discovered it was my bad regarding restoring the local database. It was kind enough to let me know it couldn’t find the new database file when I invoked verify (it didn’t on run or on saving database changes). And then I ran into this nice issue of a silent remote data wipe if an old local database, or the database of another job, is restored and repair is clicked.

You could, but developers are far too few, and this is in a somewhat unusual area involving web code.

Or it might be that the browser has limits, which may or may not be possible to bypass from Duplicati.
I think your browsers are Chromium-based. I tried Edge (also Chromium), and it got to a Stalled state, meaning the request was never sent to Duplicati. Use your developer tools (maybe F12) to take a look.
The Network activity Timing tab shows Request sent after the request leaves Stalled, but this can be slow.

The explanation from Edge covers it. Mine was getting stuck at the seventh duplicate tab, which might be from:

There are already six TCP connections open for this origin, which is the limit.

I went down this path after watching packets to Duplicati, and not seeing expected request being sent.
Testing from Firefox during this Edge-is-slow period went right through. I haven’t tested Firefox’s limits.

Possibly relevant, even if some details don’t match:

Chrome stalls when making multiple requests to same resource?

Under what circumstances will my browser attempt to re-use a TCP connection for multiple requests?

It needs another test, but I think I got another request to go out by using the IP address instead of localhost.
This might be because the difference in the URL was enough to make the browser think the URL was different.
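That would fit how browsers count connections: the per-origin limit keys on scheme + host + port, so a hostname and a numeric address are different origins even when they reach the same server. A rough illustration (127.0.0.1 as the alternative address, and the path, are my assumptions):

```shell
# Crude sketch: strip the path so only scheme://host:port remains,
# then compare. Browsers treat these as two separate origins, each
# with its own connection limit.
url_origin() {
  echo "$1" | sed -E 's#^(https?://[^/]+).*#\1#'
}

a=$(url_origin "http://localhost:8200/index.html")
b=$(url_origin "http://127.0.0.1:8200/index.html")
[ "$a" = "$b" ] && echo "same origin" || echo "different origins"
# prints: different origins
```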

Yeah, I see the point in not creating a bug report. How about an FAQ or ‘known issues’ entry? We don’t have an FAQ, do we? We could create a page under the manual > articles Duplicati 2 User's Manual

Would that make sense and be of interest? It might not get read, but it would be a place to look, and a place where a search engine might bring people. It’s probably useless if it is only this one entry (which is how everything starts), but it might grow if there’s interest from those who maintain it and help people out here on the forum. Right now I know of two candidates for entries, namely this web issue and the data-loss issue of Duplicati deleting files on the remote without warning. The latter hopefully gets fixed, but that might take a while.

There’s a stale FAQ that I think predates this forum and the manual, and is rarely found. I don’t suggest it.

The manual home page says how to do such things, but it’s cumbersome to require GitHub pull requests.
Organizationally, the manual is an individual project, and has changed little lately. Help might be welcome.
If you have an interest in documentation, I think I can point to some areas where code is ahead of docs…

There was some internal discussion on some ways to give information that’s not typical manual material.
IMO: to be beneficial, it needs to be easy to use, and used enough to make it worth the trouble of writing it.
Search could be useful to save people from having to read through however long this winds up becoming.

To avoid finding, installing, and maintaining yet another tool, I thought maybe a forum category could work; however, I would see if it could be more of an officially maintained thing and not free-for-all chaos like here. Organizational capability might be kind of limited, but at least one would get a list of headlines to glance at.

Whatever is written might need periodic review, so there’s more of a workload on the few busy volunteers.
This would probably need some additional volunteers (perhaps you?) who are interested in launching this.

We might wish to make sure that it was really frequently-asked. An awful lot of things (maybe this?) aren’t.

We would want things that are well-enough understood to write up well, and offer some useful comments.
Challenges include varying reader expertise levels, and wanting to avoid detailing everything everywhere…
For that purpose, hyperlinks might be useful, until they break… Might also want a way to collect feedback.

Not everything is a question or an issue. A recent example is Good practices for well-maintained backups.

Ideas are good, but need refining and volunteers. If you (or anyone else) has an interest, please speak up.
Having put that out (and I’m not sure how many will notice it in here), care to begin a topic directly on this?

Yeah, I do like to pass on what I’ve learned. I’m considering options. I think you are right that it’s not important enough an issue to collect in an FAQ. I’m considering writing a new guide if I find the time.

PS: I also did an update of the filters guide. The git changeset was accepted the very next day, but it’s slower to go live. Ah, it’s live on the net now; that’s so good to see :slight_smile:

Very nice job! I’ll have to go read. Filters are a topic I find confusing. I’m not sure if it’s docs, code, or me.

Updated filters appendix #86 is the pull request (another thing I’m not good at) and you spoke of the wiki, recognizing that Duplicati has over the years had a lot of places where information gets stashed away…

Few people have the diligence to round up all the knowledge on a topic (and weed out whatever’s stale).

I’m glad to see that part of the process working. If you have an interest in other manual work, I think there would be known gaps to fill that aren’t quite as tough as the issue I filed asking how to explain retention…

There are lots of ways, such as helping on the forum or in issues. If it gets too repetitive, try documenting it somewhere. Retention always confuses people, but filters possibly confuse people even more, so thank you.

Personally, I link to manual sections a lot, so prefer a complete manual. Others post names but not links.
That’s the forum-directed solution, but what’s the self-service one (or is it possible)? I’m still not certain…