I have Duplicati running in a docker container on a Raspberry Pi 4 server. I access the web page from my Windows 11 PC using Firefox and Edge. All appears to be working as it should, i.e. I can create backup jobs, save them, run them, restore from them, etc. My problem is that when I return to the Home page after a period of time (hours or days) the configuration(s) have disappeared.
I have tried the LinuxServer and Duplicati docker images, and both the regular and canary builds with the exact same issue.
The container config folder is mapped to a folder outside the container. To rule out any permissions issues I have the config folders set to 777, i.e. read/write/execute for any user, and I run the container as root.
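For anyone wanting to reproduce that check, here is a minimal sketch on a scratch folder (the real path is whatever host folder you mapped into the container):

```shell
# Scratch-folder sketch of the same permissive setup; substitute your mapped folder
d=$(mktemp -d)
chmod 777 "$d"
stat -c '%a' "$d"    # -> 777
```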
When a backup job first runs I can see the databases being created, and also the Duplicati-server.sqlite is updated.
While I can't help you with Docker, I have seen similar behaviour with a Linux install where I made the experimental change of running the server as a regular user (not root): when the Duplicati server is restarted, the config is often missing. Restarting the Duplicati server usually fixes it. Sometimes I need to restart it a second time, or just refreshing the page does the trick. I have never taken the time to search for a solution (the Duplicati web server is threatened with a forced upgrade because of obsolescence anyway).
The worst I see is delayed filling-in of various things that need real data (as opposed to menus etc.); however, that usually needs Windows (more specifically, likely the hard drive) to be rather overloaded.
About → System info, for example, can take a while before the data for things like version is filled in.
Assuming this means that constant stuff is there and that a browser refresh gets the rest (maybe delayed), you can watch the action in browser tools, e.g. Edge F12 and refresh. To make it less of a search, I've filtered on the query I think shows the config, which seems to be http://localhost:8200/api/v1/backups
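You can hit that same endpoint from a terminal instead of the browser tools. A quick sketch, assuming port 8200 is where your container's web UI is mapped (adjust host and port to your own setup):

```shell
# Query the backups listing the web UI uses; adjust host/port to your mapping
URL="http://localhost:8200/api/v1/backups"
curl -s --max-time 5 "$URL" || echo "server not reachable at $URL"
```

If the JSON comes back with your jobs listed, the server still has the configs; if it comes back empty after they "disappear", that points at the database rather than the browser.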
I read about the reboot trick but it didn't have any effect even after multiple reboots. I've also tried clearing cache but that doesn't make any difference.
I'm running docker on an Open Media Vault server that has my hard drives attached. I don't have any scheduled jobs on OMV other than ones that make a copy of my data from one HDD to another.
Can one of the developers tell me where the configs are stored on a standard Linux installation? That might give me some clues.
I wonder if this is related: I attempted to set up a new backup config with just a few files, saving them to my OneDrive (in exactly the same way as my other disappearing configs). However, this time I got this error on the last step:
~user/.config/Duplicati, where user might be a regular user or root.
Configs in Duplicati-server.sqlite next to job DB on database page.
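A quick way to look at that location from a shell (DUPLICATI_HOME overrides the default when it is set):

```shell
# Peek at the standard settings location on a Linux install
dir="${DUPLICATI_HOME:-$HOME/.config/Duplicati}"
ls -la "$dir" 2>/dev/null || echo "no settings folder at $dir"
```

You should see Duplicati-server.sqlite plus one randomly named .sqlite file per backup job.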
Both LinuxServer and Duplicati hub.docker.com pages say how to store configs in the host storage.
Saving the config from screen 5 writes it, but I don't know why you didn't get an error on the earlier configs.
My container's "config" files are mapped to host storage as per Docker's requirements. I do this the same way for all my containers.
When I compare the "config" folders from the Linux CLI to the same view via Windows Explorer I see a discrepancy. I wonder if this is relevant (note the missing ".config" folder in the CLI view):
I read somewhere that in order to have write permission to a file in a sub-folder in Linux you need write permissions on the parent folder. Is that true? If so, I wonder if the ".config" folder doesn't have the correct permissions. How do I unhide ".config" in the CLI so that I can check?
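On the unhiding question: names starting with a dot are simply not listed by default, and `ls -a` shows them. A small demo on a scratch folder:

```shell
# Dot-folders are hidden from plain ls; -a reveals them
d=$(mktemp -d)
mkdir "$d/.config"
ls "$d"               # prints nothing - .config is hidden
ls -a "$d"            # ., .., .config
ls -ld "$d/.config"   # shows the folder's permissions and owner
```

So `ls -a` (or `ls -la` for the long view) on your mapped folder will show whether `.config` is there and what its permissions are.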
--server-datafolder
Duplicati needs to store a small database with all settings. Use this option to choose where the settings are stored. This option can also be set with the environment variable DUPLICATI_HOME.
however the Docker build looks like it might use a different way. I don't use Docker, so I can't give details.
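As a sketch of the two equivalent ways to pick the settings folder on a plain Linux install (the path below is only an example, and the server launcher name may differ between builds; the Docker images normally handle this for you via the mapped config folder):

```shell
# Either the environment variable or the CLI option selects where
# Duplicati-server.sqlite lives; the path here is just an example.
export DUPLICATI_HOME=/srv/duplicati-config
# duplicati-server --server-datafolder=/srv/duplicati-config   # CLI form (launcher name may vary)
echo "$DUPLICATI_HOME"
```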
I was asking about Duplicati scheduled jobs, i.e. on screen 4 called Schedule. Or is backup all manual?
If it won't break a scheduled backup, I suppose you could move Duplicati-server.sqlite to another name. See whether you can save a job then. I don't know why Docker is seemingly giving a file access problem.
Maybe also copy the config elsewhere for extra safety, unless you've already saved job exports someplace.
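The move-aside plus safety copy could look like this. This sketch uses a scratch folder and a stand-in file; substitute your real mapped config folder, and stop the Duplicati container first so nothing has the database open:

```shell
# Move the server DB aside so Duplicati recreates a fresh one, keeping a copy.
cfg=$(mktemp -d)                                # stands in for your mapped config folder
touch "$cfg/Duplicati-server.sqlite"            # stands in for the real DB
cp -a "$cfg/Duplicati-server.sqlite" "$cfg/Duplicati-server.sqlite.bak"   # safety copy
mv "$cfg/Duplicati-server.sqlite" "$cfg/Duplicati-server.sqlite.old"      # move aside
ls "$cfg"
```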
Looks like you have three jobs configured at the moment, each having a random-letters.sqlite database.
You're rebooting the Duplicati Docker (or its host system), right? This sounds different (and worse) than the original report, where all appeared to be working unless you left for a while. But now nothing ever shows?
Given your repeated experiments, what were you doing to get them back? Does that still work?
Given some oddities (from unknown sources) it might be best to export your jobs while you can.
You could then follow up from my previous post and actually import them into the new database.
The randomly named job databases (see database screen for a map) should fill on initial backup.
Duplicati-server.sqlite should exist beforehand, but get a slight edit after backup with its statistics.
You can get a very rough view of whether you added a configuration, as file size will grow slightly.
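A rough way to watch that growth from a shell, sketched here with a scratch file standing in for Duplicati-server.sqlite:

```shell
# Before/after size check; a scratch file stands in for Duplicati-server.sqlite
f=$(mktemp)
before=$(stat -c '%s' "$f")
echo "saved job config" >> "$f"     # stands in for saving a job in the UI
after=$(stat -c '%s' "$f")
echo "$before -> $after"
```

Run `stat -c '%s'` on the real file before and after saving a config; if the size never changes, the save never reached the database.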
is probably how an SQLite complaint gets passed to your web browser through Duplicatiâs server.
A question is the "Attempt to write a read-only database" cause, e.g. did it open read-only or change?
Attempting a Google search for that found a lot of people talking permissions, but you're now 777.
Attempting to look at the very few Duplicati cases, it looked like usually permissions, then locking.
That was suspected to be macOS Time Machine holding the file, interfering with Duplicati access.
You can normally use fuser to check for open files, but I don't know how well Docker hides opens.
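For example (the path is an assumption; run it inside the container if the host view hides the opens, and `lsof` works similarly if fuser isn't installed):

```shell
# See whether any process holds the server database open
db="${DUPLICATI_HOME:-$HOME/.config/Duplicati}/Duplicati-server.sqlite"
fuser -v "$db" 2>&1 || echo "nothing has $db open (or fuser is unavailable)"
```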
You could certainly open a shell to the Docker to do manual permission testing, but don't hurt your database until you're sure you have a direct copy (or some job exports) stashed someplace safe…
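A harmless write test from the host could look like this; the container name `duplicati` and the `/config` path are assumptions from the usual image setup, so adjust both to yours. It only creates and removes an empty marker file next to the databases:

```shell
# Manual write test inside the container (name and path are assumptions)
docker exec duplicati sh -c 'touch /config/.writetest && rm /config/.writetest && echo writable' \
  || echo "write test failed (container running? name correct?)"
```

If `touch` fails there while root, the problem is below Duplicati, e.g. in the volume mapping or the filesystem it sits on.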
I've created backup jobs to run both manually and scheduled. On the occasions when I don't receive the read-only error and can save the backup jobs, they still disappear.
I simply create a new backup job using the same configuration as the one that disappeared. Then I do a database repair or delete the local database, and then run the job to recreate the local db.