Docker image getting full on Unraid, but only with new UI

I’ve been using Duplicati for years without any issues. Today I had to restore a pretty large file and noticed that there was a new UI (ver 2.2.0.0_stable_2025-10-23). As soon as the restore process started, I began getting notifications that docker image utilization was getting very high, so I stopped the restore. I’m assuming it’s trying to cache the data somewhere internally instead of using the mount points, but I couldn’t easily find where that was.

I then noticed the option to switch to the old UI, so I did that. The moment it switched back, docker image utilization dropped to normal levels. I proceeded to do the restore again, and it worked just fine. So, there’s obviously something with the new UI that’s incorrectly writing data to an internal location.

I’m wondering if there is some kind of additional folder mount that needs to be created for the new UI to work correctly. I’m on Unraid, so I’m using their Docker image with folder mounts for /backups, /source, and /config. Is there another mount path that I need to add manually? Or if not, how can I fix this to get the new UI working?

Welcome to the forum @BestITGuys

Blog post: Cut restore times by 3.8x - A deep dive into our new restore flow
might describe the cache you’re seeing, but AFAIK it should run the same with either UI. It did when I tried.

Restore fills up /tmp directory #6577 starts my thoughts on the new flow’s need for more /tmp space (sigh).
The tempdir option can move that location, or legacy-restore can bring back the old restore method.
There’s also a restore-volume-cache-hint option cooking in Canary, but all of this seems a bit hard.
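
If you want to try the tempdir route, a restore from the container console might look roughly like this. This is just a sketch: the option names are the ones mentioned above, so check duplicati-cli help for the exact spelling in your version, and the backend URL and paths are only examples.

```
# Rough sketch: point the temp folder at a bind-mounted path so restore temp
# files don't land inside the docker image. Add your usual passphrase/options.
duplicati-cli restore "file:///backups/myjob" "bigfile.img" \
  --restore-path=/source/restored \
  --tempdir=/backups/restore-temp
```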

I could be wrong about some of this, and it would be better to have some developer comment.

Possibly this is all off-target. Your comment about the old UI not having the issue is puzzling to me.
One way to distinguish them is that a live log at verbose level looks quite different between the two methods.

Thanks for the references. It definitely looks like a problem with temp and/or cache storage growing too large with the new restore flow. Any idea where that location actually is inside the Docker container? I’m not sure the devs will be able to do anything about it (assuming the new flow really does require a lot more temp storage), but the easy solution, at least for now, would be to mount that internal path from the container to an external folder, like the other mount points.

where is tmp in a docker container

asked of Google gives an AI overview:

The /tmp directory in a Docker container is located at the root of the container’s filesystem, similar to a standard Linux system.

but it then talks about mounting from the host into the container, maybe even directly onto /tmp.

I don’t use Docker, so can’t say much more. You could maybe just go in and look at /tmp?

The folder is just /tmp, but that was the first thing I did: I opened the console and ran du -hs /tmp to see how much space it was taking up, and it was empty.
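
For reference, a broader scan can show where the space is actually going inside the container. This is a sketch, assuming GNU du and sort are available in the image; run it from the container console while a restore is in progress:

```
# List the largest directories on the container's own filesystem (-x skips
# other filesystems, including the bind mounts), two levels deep.
du -xh -d 2 / 2>/dev/null | sort -h | tail -n 20
```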

I figured out where the data is being written inside the docker image: it’s /run/duplicati-temp. I got it working by mounting that path to an external folder, so that’s a pretty workable solution.

However, I would think the data should be written to /tmp instead, just to make it easier to find. Either way, this should be explicitly documented in the Docker docs, so that whichever folder ends up storing the temp data gets mapped to an external share.
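
For other Unraid users hitting this, the workaround above would be an extra path mapping in the container template. In plain docker run terms it might look something like the following sketch, where the host paths and image tag are just examples to adjust to your own setup:

```
# Sketch of the workaround: bind-mount a host folder over /run/duplicati-temp
# so the new restore flow's temp data no longer grows inside the docker image.
docker run -d \
  --name duplicati \
  -v /mnt/user/appdata/duplicati/config:/config \
  -v /mnt/user/backups:/backups \
  -v /mnt/user:/source \
  -v /mnt/user/appdata/duplicati/restore-temp:/run/duplicati-temp \
  lscr.io/linuxserver/duplicati:latest
```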

Great that you got it working!

From Duplicati’s perspective, it always uses the system-defined temp folder by default, so something in the image has changed the temp folder.
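
On Linux the system temp folder is normally taken from the TMPDIR environment variable, falling back to /tmp, so one quick check is whether the image sets something temp-related in the environment. A sketch, assuming the container is named duplicati:

```
# Look for an environment variable that redirects the temp folder.
docker exec duplicati env | grep -iE 'tmp|temp'
```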

Since the image is from LinuxServer, I suggest you open an issue on their repo and request the change.