I had previously been running Duplicati quite nicely on my TrueNAS system. However, that all changed when the jail went sideways. Plugins and add-ons are going to be deprecated anyway, so I decided it was a good time to move on. I repurposed a second server to be just an rsync target for my TrueNAS server, with the intention of preserving paths and permissions, no problem. And it works perfectly. Thank you, 40-year-old software. haha. So that gives me some basic first-line DR.
I deployed the Docker image, copied my sqlite and config files over, and Duplicati spun up just fine. I can see my backup history and everything seemed OK. BUT, on TrueNAS I was mapping paths into the jail. I completely forgot about that. So the backup jobs still reference the jail’s mapped paths.
But I really don’t want to do that now. I can absolutely replicate those mappings via Docker configuration. Easy. But what I would really rather have is /mnt/vol01 simply mapped directly into the container, so I can pick and choose sources as needed, versus stopping the container, adding the volume to the compose file, restarting the container, blah blah. Roughly what I mean, in compose terms, is below.
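(A sketch only — the image name, port, and config path are placeholders for however you run it; the point is the one-to-one /mnt/vol01 mapping so the recorded source paths keep resolving inside the container.)

```yaml
services:
  duplicati:
    image: lscr.io/linuxserver/duplicati:latest
    ports:
      - "8200:8200"                 # Duplicati web UI
    volumes:
      - /path/to/config:/config     # sqlite + settings carried over from the jail
      - /mnt/vol01:/mnt/vol01:ro    # whole pool, same path inside and out
```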
So. Any thoughts on how I can mask that path? Update it? Voodoo? Dark wizarding stuff? $20?
It’s not the end of the world obviously. I might even go so far as to just refresh my whole backup and kinda start over.
Since your OS is compatible, it should just work if you change your paths in the configuration.
The first backup will be slow, because it needs to read all the files again, and the metadata has probably changed, so it might need to upload that. But the content is already in the backup, so it will not be stored again.
On the restore side you will just have a break where the old versions have the old paths and any new versions have the new paths, but the common path prefix is stripped anyway if you select a target folder.
From a Duplicati point of view, you can alter the Source path as you wish. Data deduplication will prevent duplicate storage of the files’ contents, but you’ll have two sets of paths for a while. Depending on your retention policy, the old paths may eventually vanish, but the data will remain as long as the new paths have the same data. It’s sort of like hard links, where two names point at the same content.
As the manual puts it: “This approach is also known as deduplication ensuring that each ‘chunk’ of data is stored just only once. With this approach, duplicate files are detected regardless of their names or locations.”
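To make the “chunk stored only once” idea concrete, here is a toy Python sketch of hash-based deduplication. It is an illustration of the concept only, not Duplicati’s actual storage format (the real thing adds compression, encryption, and its own remote file layout); the fixed chunk size is just an assumption for the example.

```python
import hashlib

CHUNK = 100 * 1024  # fixed-size blocks, assumed for this toy example

store = {}  # hash -> chunk bytes; stand-in for the files on the backup target

def backup(data: bytes) -> list[str]:
    """Store each chunk once, keyed by its SHA-256; return the file's 'recipe'."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # already-seen chunks cost nothing
        recipe.append(digest)
    return recipe

# The "same file" backed up under two different paths:
old_path = backup(b"x" * 500_000)
new_path = backup(b"x" * 500_000)

assert old_path == new_path  # both recipes point at the same chunks
assert len(store) == 2       # 4 identical full chunks + 1 short tail chunk
```

This is why changing the Source path only costs you a slow re-scan: the chunks hash to values already in the store, so nothing gets uploaded twice.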
EDIT:
Irrelevant comment: there are duplicate words in “stored just only once” in the manual.