Backup configs disappear

I have Duplicati running in a Docker container on a Raspberry Pi 4 server. I access the web page from my Windows 11 PC using Firefox and Edge. All appears to be working as it should, i.e. I can create backup jobs, save them, run them, restore from them, etc. My problem is that when I return to the Home page after a period of time (hours or days), the configuration(s) have disappeared.

I have tried both the LinuxServer and Duplicati Docker images, and both the regular and canary builds, with the exact same issue.

The container's config folder is mapped to a folder outside the container. To rule out any permission issues I have the config folders set to 777, i.e. read/write/execute for any user, and I run the container as root.

When a backup job first runs I can see the databases being created, and Duplicati-server.sqlite is also updated.

Aaarrgghh. What is going on!! Any ideas?


While I can’t help you with Docker, I have seen similar behaviour on a Linux install where I made the experimental change of running the server as a regular user (not root): when the Duplicati server is restarted, the config is often missing. Restarting the Duplicati server usually fixes it; sometimes I need to restart a second time, or just refreshing the page does the trick. I have never taken the time to search for a solution (the Duplicati web server is slated for a forced upgrade because of obsolescence anyway).

The worst I see is delayed filling-in of various things that need real data (as opposed to menus etc.); however, that usually requires Windows (more specifically, likely the hard drive) to be rather overloaded.

About → System info, for example, can take a while before data such as the version is filled in.

Assuming this means the constant stuff is there and that a browser refresh fetches the rest (maybe delayed), you can watch the action in the browser's developer tools, e.g. F12 in Edge, then refresh. To narrow the search, I’ve filtered on the query that I think returns the config, which seems to be http://localhost:8200/api/v1/backups

Once filled, I don’t know why it would empty again, but maybe you can spot the culprit by monitoring, as above.

I suppose I should ask what happens while you’re away, e.g. are scheduled jobs running then?

I read about the reboot trick but it didn’t have any effect even after multiple reboots. I’ve also tried clearing cache but that doesn’t make any difference.

I’m running docker on an Open Media Vault server that has my hard drives attached. I don’t have any scheduled jobs on OMV other than ones that make a copy of my data from one HDD to another.

Can one of the developers tell me where the configs are stored on a standard Linux installation? That might give me some clues.

I wonder if this is related: I attempted to set up a new backup config with just a few files, saving them to my OneDrive (in exactly the same way as my other, disappearing configs). However, this time I got this error on the last step:


~user/.config/Duplicati, where user might be a regular user or root.
Configs are in Duplicati-server.sqlite, next to the job DBs listed on the database page.
Both the LinuxServer and Duplicati Docker pages describe how to store configs in host storage.

Saving the config from screen 5 writes it, but I don’t know why you didn’t get an error on the earlier configs.

My container’s ‘config’ files are mapped to host storage as per Docker’s requirements. I do this the same way for all my containers.

When I compare the ‘config’ folders from the Linux CLI to the same view via Windows Explorer I see a discrepancy. I wonder if this is relevant (note the missing ‘.config’ folder in the CLI view):

This is the view via Windows Explorer

I read somewhere that in order to have write permission to a file in a sub-folder in Linux, you need write permission on the parent folder. Is that true? If so, I wonder if the ‘.config’ folder doesn’t have the correct permissions. How do I unhide ‘.config’ in the CLI so that I can check?

From ls manual page:

-a, --all
do not ignore entries starting with .

But you can cd into it blind, knowing it’s just hidden from a plain ls. (As for the permissions question: to modify an existing file you need write permission on the file itself plus execute/search permission on each parent directory; write permission on the parent folder is only needed to create or delete entries in it.)
It might be interesting to see what’s kept in there (if anything).
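To underline that ‘.config’ is only hidden from a plain `ls` — anything that lists entries programmatically will see it — here is a quick demonstration (throwaway temp directory, just for illustration):

```python
import os
import tempfile

# Build a stand-in for the container's config folder,
# containing a dot-directory like ".config".
base = tempfile.mkdtemp()
os.mkdir(os.path.join(base, ".config"))
os.mkdir(os.path.join(base, "backups"))

# os.listdir (like `ls -a`, unlike plain `ls`) does not skip dot entries.
entries = sorted(os.listdir(base))
print(entries)  # ['.config', 'backups']
```

So the CLI view and the Windows Explorer view should agree once `-a` is used; the folder was there all along.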

Note that you asked for the standard Linux system location, but
config files can be put anywhere. It looks like some are right there.


Duplicati needs to store a small database with all settings. Use this option to choose where the settings are stored. This option can also be set with the environment variable DUPLICATI_HOME.

however, the Docker build looks like it might use a different mechanism. I don’t use Docker, so I can’t give details.
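To see the lookup order the quoted option text describes, here’s a tiny sketch (illustrative logic only, not Duplicati’s actual code — it just mirrors “use DUPLICATI_HOME if set, otherwise the standard location”):

```python
import os

def settings_folder():
    """Resolve where the settings DB would live: the DUPLICATI_HOME
    environment variable if set, otherwise ~/.config/Duplicati.
    (Illustrative only -- not Duplicati's real resolution code.)"""
    return os.environ.get("DUPLICATI_HOME") or os.path.expanduser("~/.config/Duplicati")

# In a Docker setup, DUPLICATI_HOME would typically point at the mapped volume:
os.environ["DUPLICATI_HOME"] = "/config"
print(settings_folder())  # /config
```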

This is the view from Windows Explorer. The folders are all empty

I was asking about Duplicati scheduled jobs, i.e. on screen 4 called Schedule. Or is backup all manual?

If it won’t break a scheduled backup, I suppose you could rename Duplicati-server.sqlite to something else. See whether you can save a job then. I don’t know why Docker is seemingly giving a file access problem.
Maybe also copy config elsewhere for extra safety, unless you’ve already saved job exports someplace.
Looks like you have three jobs configured at the moment, each having a random-letters.sqlite database.
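Since the advice is to copy the config somewhere safe before experimenting, here’s a minimal sketch of a timestamped stash. The real DB path depends on your container mapping; the demo below uses a throwaway file, and you should stop Duplicati before copying so the copy is consistent:

```python
import os
import shutil
import tempfile
import time

def stash_copy(db_path):
    """Copy the server DB to a timestamped .bak file next to it."""
    backup = f"{db_path}.{time.strftime('%Y%m%d-%H%M%S')}.bak"
    shutil.copy2(db_path, backup)  # copy2 preserves timestamps too
    return backup

# Demo with a throwaway file standing in for Duplicati-server.sqlite:
demo = os.path.join(tempfile.mkdtemp(), "Duplicati-server.sqlite")
with open(demo, "wb") as f:
    f.write(b"sqlite data")
backup = stash_copy(demo)
print(os.path.exists(backup))  # True
```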

You’re rebooting the Duplicati Docker container (or its host system), right? This sounds different (and worse) than the original report, where all appeared to be working unless you left for a while. But now nothing ever shows?

Given your repeated experiments, what were you doing to get them back? Does that still work?

Given some oddities (from unknown sources) it might be best to export your jobs while you can.

You could then follow up from my previous post and actually import them into the new database.

The randomly named job databases (see database screen for a map) should fill on initial backup.

Duplicati-server.sqlite should exist beforehand, but gets a slight edit after a backup to record its statistics.
You can get a very rough view of whether you added a configuration, as the file size will grow slightly.
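To illustrate that rough size check (with a made-up table name, not Duplicati’s actual schema): writing rows into a SQLite file makes it grow in whole pages, so saving a config should nudge the file size up:

```python
import os
import sqlite3
import tempfile

# A throwaway SQLite DB standing in for Duplicati-server.sqlite.
db = os.path.join(tempfile.mkdtemp(), "server.sqlite")
con = sqlite3.connect(db)
con.execute("CREATE TABLE Backup (id INTEGER PRIMARY KEY, name TEXT)")  # hypothetical schema
con.commit()
before = os.path.getsize(db)

# "Saving configs": enough rows to force SQLite to allocate new pages.
con.executemany("INSERT INTO Backup (name) VALUES (?)",
                [(f"job-{i}",) for i in range(500)])
con.commit()
con.close()

after = os.path.getsize(db)
print(after > before)  # True
```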

is probably how an SQLite complaint gets passed to your web browser through Duplicati’s server.
One question is what causes “Attempt to write a read-only database”, e.g. did the database open read-only, or change later?
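For reference, this complaint is easy to reproduce with any SQLite library once the database is opened read-only; Python’s stdlib sqlite3 raises the analogous message:

```python
import os
import sqlite3
import tempfile

# Create a normal, writable database first.
db = os.path.join(tempfile.mkdtemp(), "server.sqlite")
con = sqlite3.connect(db)
con.execute("CREATE TABLE t (x)")
con.commit()
con.close()

# Reopen it read-only via a SQLite URI, then attempt a write.
ro = sqlite3.connect(f"file:{db}?mode=ro", uri=True)
msg = ""
try:
    ro.execute("INSERT INTO t VALUES (1)")
except sqlite3.OperationalError as e:
    msg = str(e)
print(msg)  # attempt to write a readonly database
```

The same error appears if the file (or its containing directory, which SQLite needs for its journal/WAL files) becomes unwritable after opening.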

A Google search for that error finds a lot of people talking about permissions, but you’re now at 777.
Looking at the very few Duplicati cases, it was usually permissions, occasionally locking.
That was suspected to be macOS Time Machine holding the file, interfering with Duplicati access.

You can normally use fuser to check for open files, but I don’t know how well Docker hides them.
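If fuser isn’t available inside the container, a rough stand-in is to scan /proc (a Linux-only sketch; run inside the container it only sees the container’s own processes, so you may also want to run it on the host):

```python
import os

def pids_with_open(path):
    """Scan /proc for processes holding `path` open -- a crude
    stand-in for `fuser path` on Linux."""
    target = os.path.realpath(path)
    pids = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = f"/proc/{pid}/fd"
        try:
            for fd in os.listdir(fd_dir):
                if os.path.realpath(os.path.join(fd_dir, fd)) == target:
                    pids.append(int(pid))
                    break
        except (PermissionError, FileNotFoundError):
            continue  # process vanished, or we can't inspect it
    return pids

# Demo: this process holds a temp file open, so its own PID shows up.
import tempfile
f = tempfile.NamedTemporaryFile()
print(os.getpid() in pids_with_open(f.name))  # True
```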

You could certainly open a shell into the Docker container to do manual permission testing, but don’t hurt your database until you’re sure you have a direct copy (or some job exports) stashed someplace safe…

Yes, I’ve tried restarting the container and rebooting the Raspberry Pi host.

I’ve created backup jobs to run both manually and scheduled. On the occasions when I don’t receive the read-only error and can save the backup jobs, they still disappear.

I simply create a new backup job using the same configuration as the one that disappeared. Then I do a database repair or delete the local database, and then run the job to recreate the local db.

Yes, I can see that happening. As I said in my original post, the backups work correctly and I can successfully restore files.

I’ll give that a try.

All looks good here:

Nope. Still stuck at the read-only error