Something wrong with web UI

Hi all,
Starting roughly a week ago, I noticed the UI behaving strangely. As soon as I log in, it tells me the connection is lost (there’s a popup), then shows a countdown (still a popup), then it logs in automatically (without me clicking anything). Then I notice at the top of the page that it tries to “verify the backend data” and starts counting files, but before that completes it says again that the connection is lost. It seems that only one of the two configurations is problematic.



My configuration is quite simple: I have the linuxserver/duplicati Docker image (the “latest” tag uses the “beta” branch). Only two identical jobs, pointing to two identical buckets at a remote provider.
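For reference, this is roughly how the container is started (a sketch only; the paths, IDs, and timezone are placeholders, not my exact values):

  docker run -d --name duplicati \
    -e PUID=1000 -e PGID=1000 -e TZ=Etc/UTC \
    -p 8200:8200 \
    -v /path/to/config:/config \
    -v /path/to/source:/source \
    --restart unless-stopped \
    linuxserver/duplicati:latest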

Docker container logs can be summarized as a long list of:

...
Connection to localhost (127.0.0.1) 8200 port [tcp/*] succeeded!
Server has started and is listening on port 8200
Inside getter
Connection to localhost (127.0.0.1) 8200 port [tcp/*] succeeded!
Server has started and is listening on port 8200
Inside getter
Connection to localhost (127.0.0.1) 8200 port [tcp/*] succeeded!
Server has started and is listening on port 8200
Inside getter
Connection to localhost (127.0.0.1) 8200 port [tcp/*] succeeded!
Server has started and is listening on port 8200
Inside getter
...

Not sure if those come from my uptime-checker server, which regularly pings Duplicati… however, scrolling a bit further up I have:

fail: Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware[1]
      An unhandled exception has occurred while executing the request.
      Duplicati.WebserverCore.Exceptions.NotFoundException: No active backup
         at Duplicati.WebserverCore.Endpoints.V1.ProgressState.Execute()
         at lambda_method245(Closure, EndpointFilterInvocationContext)
         at Duplicati.WebserverCore.Middlewares.HostnameFilter.InvokeAsync(EndpointFilterInvocationContext context, EndpointFilterDelegate next)
         at Duplicati.WebserverCore.Middlewares.LanguageFilter.InvokeAsync(EndpointFilterInvocationContext context, EndpointFilterDelegate next)
         at Microsoft.AspNetCore.Http.RequestDelegateFactory.<ExecuteValueTaskOfObject>g__ExecuteAwaited|129_0(ValueTask`1 valueTask, HttpContext httpContext, JsonTypeInfo`1 jsonTypeInfo)
         at Duplicati.WebserverCore.Middlewares.WebsocketExtensions.<>c__DisplayClass0_0.<<UseNotifications>b__0>d.MoveNext()
      --- End of stack trace from previous location ---
         at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddlewareImpl.<Invoke>g__Awaited|10_0(ExceptionHandlerMiddlewareImpl middleware, HttpContext context, Task task)

…which I am not sure is a cause or a consequence of something else.
Do you have any idea?

The server has been running more or less unmaintained for years now. Never an issue… The Docker image is updated automatically in the background (I am now on Duplicati 2.1.0.2_beta_2024-11-29).

Please note that the issue is not systematic. For example, this morning the backup job failed (the last successful one is from yesterday), while the last failed one before that is maybe from 2–3 days ago: it is somewhat intermittent, and it seems to only affect “Documents” and not “Pictures”. But again, in my opinion they are almost identical configurations with identical remote buckets. Also, I am pretty sure no new document/picture has been added to the collection, so I am excluding the possibility that some problematic files cause this.

Hi @funkysloth, welcome to the forum.

The timing is most likely related to the new beta release that went out 2024-11-29, which also had a rewrite of the authentication code.

The “connection lost” dialog is only checking the connection from the browser to Duplicati. Duplicati may well be running fine in the background, as if you were not logged in at all.

The countdown should only happen if there are two consecutive login failures. Can you try clearing the browser cache?

Also, if you can, try the developer tools in your browser; maybe we can get a closer look at which requests are failing and identify the sequence.

That one is mostly log spam caused by an older design of the API. If no backup task is active, it returns 404, which the logging code treats as a failed request and logs. It is safe to ignore.

I do not fully understand this part. The screenshots are just for the login, that view has no effect on the running backup. What error messages do you see on the failed backups?

This is the log of the Firefox console using “Incognito mode” (not sure if this is what you meant)

Where do I get the logs of failed backups? From the UI I can see that the scheduled backups didn’t run at all (they are supposed to run at 3 a.m. every night)…

However, I am noticing that one of the two SQLite files is getting surprisingly big (17 GB doesn’t seem right to me).
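(For what it’s worth, I checked the sizes from the host with something like the following, assuming the standard /config mapping of this image:)

  docker exec duplicati sh -c 'ls -lh /config/*.sqlite'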

Yes, exactly what I meant. You can see that the error is 502, meaning that your browser can connect to the proxy machine, but the connection from the proxy server back to Duplicati fails for some reason.

Looking back at the original message you posted:

This clearly indicates that Duplicati has restarted for some reason and would explain the 502 errors and the UI experience.

Next question is then: why does it restart? The logs here do not help in that regard.
If Duplicati itself is crashing, there should be a crashlog-Duplicati.txt file in the config folder (since it has started the server); otherwise it is in the system temp folder (which is normally wiped on Docker restart).

Could it be some monitoring or similar that detects an out-of-memory condition or something like it and stops the container?
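A quick way to check both from the Docker host (a sketch, assuming the container is named duplicati and /config is the mapped config folder):

  # crash log, if Duplicati itself crashed after the server started
  docker exec duplicati cat /config/crashlog-Duplicati.txt
  # restart count and whether the kernel OOM-killed the container
  docker inspect -f '{{.RestartCount}} {{.State.OOMKilled}}' duplicati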

This indicates that the backup starts (so the scheduler thinks it has started) but then crashes afterwards.

That is a different question, but it depends on how much data is included in the backup (number of files, number of versions, size of data, number of remote files, etc).

You can see from the journal file, that an operation is active on the database (or it crashed while it was writing).

Is the job that does not run using the Storj backend? There is another report that it crashes each time it tries to run a Storj backup.

Yes, Storj is involved. I know that the database size is not very indicative, but I noticed that the automatic database backups from Nov 30th for the two configurations are ~200 MB and ~1 GB. In the picture you can see that the 200 MB DB became 17 GB in 10 days without any change in the monitored/backed-up folder.

I also tried deleting both SQLite files of the two configurations and letting Duplicati recreate them. It did not succeed.

At this point I guess the best approach is to increase the log verbosity and collect the logs. Can you guide me in this? I would like to (a sketch of what I have in mind follows the list):

  • increase the log verbosity to a reasonable level. Is this something that can be passed as an env var?
  • store the logs somewhere they are not lost (like a log file in a folder mounted as a Docker volume). Can this be set as an env var?
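Something like this is what I have in mind. I have seen --log-file and --log-file-log-level as command-line options; whether the DUPLICATI__ environment-variable form below actually works is just my guess, please correct me:

  docker run -d --name duplicati \
    -e DUPLICATI__LOG_FILE=/config/duplicati-server.log \
    -e DUPLICATI__LOG_FILE_LOG_LEVEL=Verbose \
    -v /path/to/config:/config \
    ...
    linuxserver/duplicati:latest

With /config mounted as a volume, the log file would survive container restarts.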

That image shows the completion time. Please view one to see if start time was around 3 a.m. Alternatively, the Restore menu shows start times. I’m not sure why the choices were different.
Having no changes will (by default) not create a new backup version, but it should make a log.

When in a log, you can also see a summary of what was added or changed. What does it say?

If Dec 9-12 were manual, check About → Show log → Stored for signs of issues around 3 a.m.

I’ve looked at all the pictures a few times, and all I can see is 17 GB. Did I miss a picture there?

I will check the logs tonight. Regarding the SQLite size: if you look at that picture you can see “IJAAKJDFRR.sqlite” is 17 GB, while “backup IJAAKJDFRR 202411300300.sqlite” is only 196 MB. I think the database used to be ~200 MB (for a Documents folder that backs up less than 50 GB). In just 10 days it grew to 17 GB (without changes in the monitored folder). I think those 17 GB are just failure messages…

Thanks for pointing out the backup file. Those are made when (maybe among other things) a database version upgrade occurs. Your 11/30 one might be from a quick pickup of the 11/29 beta.

Looking forward to the log info. The logs will also show the start and stop times of the backups, and failure messages to some extent, though they are quite limited by default (or maybe always).

Failures should produce GUI popups (you mentioned none), and the log dots should not be green (yours are).

The job log’s Complete log has a BackendStatistics section with uploads and KnownFileSize. There’s no direct reporting of database size, but growth at the backend would go along with DB growth.

If you can look at the files on the destination, sorted by date, that’s another way to see action. Backup will upload dblock and dindex files with changes, then upload a dlist file to end things.
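If the destination is also reachable with rclone, a sketch of listing the remote files sorted by date (the remote and bucket names are placeholders for your configured ones):

  rclone lsl storj:my-bucket/documents | sort -k2,3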

<job> → Show log → Remote is another way to see that action (reverse chronological order).


Ok, I tried to restore the DB; it started setting up but found a mismatch between the files and the DB.


Then I tried to repair as suggested.

I don’t have those (the oldest is from Dec 17th), maybe because I did some trial and error with older databases and might have lost the older entries… I am not sure why I don’t have more.

Then I tried repair with --rebuild-missing-dblock-files

  Listing remote folder ...
remote file duplicati-i435d8dd9ca6349e09716eef7d5641aae.dindex.zip.aes is listed as Verified with size 893 but should be 28701, please verify the sha256 hash "nygqBBpGUJkxgbftxAgsp3JV3FSzxCCrS8OV21ucdGs="
remote file duplicati-ibf291a5e41c94382a5f765b075bee3ff.dindex.zip.aes is listed as Verified with size 893 but should be 28685, please verify the sha256 hash "wpi2r4Z4Kl6G8pK++S9puvvdxnhrEUa91ZWpX4JfCpo="
remote file duplicati-ib552eaad51d2480a9b546583840df41b.dindex.zip.aes is listed as Verified with size 893 but should be 28685, please verify the sha256 hash "4mIBqcHQYc7qMkiUOT5d8I1j8PvYcFLPaGOlaVRYFmQ="
remote file duplicati-i8ab3e15218804d6a85f29aa4d30dc15e.dindex.zip.aes is listed as Verified with size 893 but should be 28685, please verify the sha256 hash "D4EvHoA3ZaLQJZHAZ8P3SdC/j/8gyn6yN9i/XpO/x60="
  Downloading file duplicati-i435d8dd9ca6349e09716eef7d5641aae.dindex.zip.aes (unknown) ...
Failed to perform verification for file: duplicati-i435d8dd9ca6349e09716eef7d5641aae.dindex.zip.aes, please run verify; message: Remote verification failure: [Missing, +EBmnwWD5s4d0nfbfHbVrEzgE06Z12e8hapu3hxw4TQ=] => Remote verification failure: [Missing, +EBmnwWD5s4d0nfbfHbVrEzgE06Z12e8hapu3hxw4TQ=]
  Downloading file duplicati-ibf291a5e41c94382a5f765b075bee3ff.dindex.zip.aes (unknown) ...
Failed to perform verification for file: duplicati-ibf291a5e41c94382a5f765b075bee3ff.dindex.zip.aes, please run verify; message: Remote verification failure: [Missing, +EBmnwWD5s4d0nfbfHbVrEzgE06Z12e8hapu3hxw4TQ=] => Remote verification failure: [Missing, +EBmnwWD5s4d0nfbfHbVrEzgE06Z12e8hapu3hxw4TQ=]
  Downloading file duplicati-ib552eaad51d2480a9b546583840df41b.dindex.zip.aes (unknown) ...
Failed to perform verification for file: duplicati-ib552eaad51d2480a9b546583840df41b.dindex.zip.aes, please run verify; message: Remote verification failure: [Missing, +EBmnwWD5s4d0nfbfHbVrEzgE06Z12e8hapu3hxw4TQ=] => Remote verification failure: [Missing, +EBmnwWD5s4d0nfbfHbVrEzgE06Z12e8hapu3hxw4TQ=]
  Downloading file duplicati-i8ab3e15218804d6a85f29aa4d30dc15e.dindex.zip.aes (unknown) ...
Failed to perform verification for file: duplicati-i8ab3e15218804d6a85f29aa4d30dc15e.dindex.zip.aes, please run verify; message: Remote verification failure: [Missing, +EBmnwWD5s4d0nfbfHbVrEzgE06Z12e8hapu3hxw4TQ=] => Remote verification failure: [Missing, +EBmnwWD5s4d0nfbfHbVrEzgE06Z12e8hapu3hxw4TQ=]
Failed to perform cleanup for missing file: duplicati-b6b8f75f021ab4f36a098fa1a456755b3.dblock.zip.aes, message: Unexpected empty block volume: duplicati-b6b8f75f021ab4f36a098fa1a456755b3.dblock.zip.aes => Unexpected empty block volume: duplicati-b6b8f75f021ab4f36a098fa1a456755b3.dblock.zip.aes
Failed to perform cleanup for missing file: duplicati-b412aa3f99e394f5c9cc0671bf7bcf0ce.dblock.zip.aes, message: Unexpected empty block volume: duplicati-b412aa3f99e394f5c9cc0671bf7bcf0ce.dblock.zip.aes => Unexpected empty block volume: duplicati-b412aa3f99e394f5c9cc0671bf7bcf0ce.dblock.zip.aes
Failed to perform cleanup for missing file: duplicati-b60da6d5724004abda1fc8dcbe06ad238.dblock.zip.aes, message: Unexpected empty block volume: duplicati-b60da6d5724004abda1fc8dcbe06ad238.dblock.zip.aes => Unexpected empty block volume: duplicati-b60da6d5724004abda1fc8dcbe06ad238.dblock.zip.aes
Failed to perform cleanup for missing file: duplicati-bae477da4a74e41b59a2df6b43410f802.dblock.zip.aes, message: Unexpected empty block volume: duplicati-bae477da4a74e41b59a2df6b43410f802.dblock.zip.aes => Unexpected empty block volume: duplicati-bae477da4a74e41b59a2df6b43410f802.dblock.zip.aes
Return code: 0

Why? Did you save the old DB that (hopefully) matched the destination after last backup run?

It invites pain if the old DB differs from the current DB in anything other than a version update. The database knows what it expects and will cite mismatches, though extra files are the more common case.

OK, I see that later as 8826 remote files that are not recorded in the database (they’re newer).

What storage type is the Destination? It seems to have (possibly) corrupted a whole lot of files.

I tried “repair”, not restore. I had renamed the (maybe bad) local SQLite files of the Documents/Pictures configurations (so that Duplicati cannot see them) and then ran “repair” with “--rebuild-missing-dblock-files”. It seemed to progress, but at some point I get “connection lost” (I am on the web UI) and have to start over. Maybe I should run those from a “docker exec …” (roughly as sketched below), but it’s not easy.
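For reference, this is roughly the command I was trying to build. The binary location is my guess based on where this image puts its tools, and the target URL and passphrase would come from the exported job configuration:

  docker exec -it duplicati /opt/duplicati/duplicati-cli repair \
    "<target-url-from-exported-config>" \
    --dbpath=/config/IJAAKJDFRR.sqlite \
    --passphrase=<passphrase> \
    --rebuild-missing-dblock-files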

Now: ideally I would like to get to a working state both locally and remotely. I am not scared of losing files (neither locally nor remotely). I have a way to rsync locally from another source, and then from there update the remote backups (Storj). What’s the most “aggressive” way of doing this? Something like: “Back up local files to remote. If a few remote files are bad, delete them; if a few local files are bad, delete them.” Once this is consistent I can rsync the local folders from another copy and update the remote again.

This might be part of the problem. One problem got reported, but the other one is a crash.

If you managed to hide the database, you ran a DB recreate. I don’t think it takes that option.

Unless this was GUI Commandline or maybe the Repair button, DB location may also be off.

So the following gets amended:

but that image is from a Backup. Then there’s a repair whose log does look like a real Repair.

but if that’s this:

It seems to have a database then, as its first 4 lines complain that size doesn’t match database.

This is being reported far more often than I would like. I’m not sure devs have any good theories.

What’s on that source? I don’t know what you mean.

Remote had better not be a Duplicati backup unless the source is valid Duplicati backup files.

I’m totally lost on what you’re trying to do. Are you asking for non-Duplicati file backup ideas?

rclone sync to Storj can probably put at least one version to Storj. Sync is not well versioned.

EDIT:

If you want a versioned Duplicati backup, but Duplicati’s Storj support works poorly (still somewhat TBD), backing up to a local drive and then doing an rclone sync of that might work, but it’s pretty clunky. A sketch follows.
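A sketch of that two-step approach (the local path and the rclone remote/bucket names are placeholders; the remote has to be set up first with rclone config):

  # step 1: point the Duplicati job at a local folder, e.g. /mnt/duplicati-local, and run the backup
  # step 2: mirror that folder to Storj
  rclone sync /mnt/duplicati-local storj:my-bucket/documents --progress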

To see if you really have damaged files on Storj, you could pull one down and see if it decrypts.
AES Crypt is a GUI tool, or Duplicati ships a CLI tool. I’m not sure about Docker, but Linux has:

/usr/bin/duplicati-aescrypt -> ../lib/duplicati/duplicati-aescrypt

This seems odd: why are the index files so small, and why do they change sizes? 893 bytes looks more like an error message than an actual file.

Docker has it in /opt/duplicati/duplicati-aescrypt
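If I remember the SharpAESCrypt syntax correctly (d for decrypt, then passphrase, input file, output file; treat this as an assumption and check the tool’s help output first):

  docker exec duplicati /opt/duplicati/duplicati-aescrypt d <passphrase> \
    /config/duplicati-i435d8dd9ca6349e09716eef7d5641aae.dindex.zip.aes /config/test.zip

If the 893-byte files fail to decrypt, that would support the idea that they are stored error messages rather than real index files.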

Sorry for the late reply, I am super busy these days. Every night I have tried to spend some time on this, but I never succeeded. At this point I am ready to throw the remote backups on Storj away and start again from scratch (I will lose snapshots, but I can live with that).

Bottom line: the web UI is barely usable for me. No matter what, after some time I get “connection lost” (sometimes after a few seconds, sometimes after a minute). All my attempts at repairing the database failed with that “connection lost”. I suspect this loss of connection might be the consequence and not the cause. Either way, it is impossible for me to do anything via the UI.

I have also tried exporting the two configurations as JSON. I created a new Duplicati server instance and imported the two JSON files, with the same result: backup actions throw an error and ask to “delete database and repair”, and the repair action times out with that connection error.

I have also tried running those commands from within the Docker container (this way the web connection should be out of the equation), but I get “Segmentation fault (core dumped)”.

At this point I am quite helpless. I think I am not skilled enough to understand the mechanics behind Duplicati, nor do I have the precision to follow your offline support (which I really appreciate, by the way). I will try a backup from scratch, emptying the remote buckets, and eventually (if that fails) I will consider other software.

What’s on that source? I don’t know what you mean.

I feel I cannot explain myself (my fault :slight_smile: I am not even sure of the correct words to use), and this has also demotivated me in this recovery process.

However, my setup is quite simple (from my perspective):

   [NAS1] --> rsync --> [NAS2] --> duplicati --> [StorJ]
  documents            documents              dblocks/indices
(unencrypted)        (unencrypted)             (encrypted)
(w/ snapshots)      (latest version)          (w/ snapshots)

It seems at this point that the StorJ part is inconsistent and every attempt to fix has failed for me.

I’d say let’s close this thread for now, since I feel it’s going nowhere.
Thanks guys

It appears to be a problem with the Uplink.NET library not supporting concurrent calls on Linux. @TopperDEL is on it.

Duplicati does not really care where the files are located, so you can move the files between storage providers freely. When Duplicati starts a backup, it verifies that all files are in place and then continues. This makes it simple to copy or move the files from one provider to another and then edit the backup configuration to point to the new place, as sketched below.
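A sketch of such a move with rclone (the remote names are placeholders; any tool that copies the files verbatim will do):

  # copy all duplicati-*.* files from the old destination to the new one
  rclone copy storj:old-bucket/documents /mnt/new-destination --progress

Then edit the job’s Destination in the UI to point at the new location and run the backup.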

Not sure of the timeline for a fix, as Uplink.NET is not managed by Duplicati.

But if you prefer, you can downgrade to 2.0.8.1, which appears to have a working Storj library.
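With this Docker image that means pinning a 2.0.8.1 tag instead of latest (a sketch; look up the exact tag name on the image’s Docker Hub page, the one below is a placeholder):

  docker pull linuxserver/duplicati:<2.0.8.1-tag-from-docker-hub>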