I’m not following this. You copied files to the NAS, but can’t copy them back? If it’s a permission problem, fix that. This sounds like a local file permission problem, or maybe that plus something related to using the NAS.
I suppose you may have locked Duplicati out, so don’t do that. You can even set the permissions back after copying the files in.
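As a rough sketch (the paths and file names here are sandbox stand-ins, not your actual NAS mount or destination), copying files back and making sure Duplicati can still read them might look like:

```shell
# Sandbox demonstration -- substitute your real NAS mount and Duplicati
# destination paths. File names below are made up for illustration.
nas=$(mktemp -d)    # stands in for the NAS copy of the backup
dest=$(mktemp -d)   # stands in for the folder Duplicati reads from

# Simulate backup files saved on the NAS
touch "$nas/backup-20240101.dlist.zip" "$nas/backup-20240101.dblock.zip"

# Copy the files back into the destination Duplicati uses
cp "$nas"/* "$dest"/

# Make sure the Duplicati user can read them (don't lock Duplicati out);
# you can tighten permissions again after verifying the backup works.
chmod 644 "$dest"/*

ls -l "$dest"
```

The key point is the last step: whatever permissions you end up with, the account Duplicati runs as must be able to read those files.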
https://hub.docker.com/r/linuxserver/duplicati Does that describe yours? If so, how is your config done? You can possibly reuse the old Duplicati-server.sqlite if the job configuration hasn’t changed. Don’t use an old job database, though, as it must match the destination files (dlist, dblock, and dindex) that you put into backups.
For safety, you can rename the old job database (just in case), then run Repair to create a current one.
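For example (this is a sandbox sketch; the random database name is made up, so use the one shown on your job’s Database screen instead):

```shell
# Sandbox demonstration -- replace $config and the file name with your
# real config path and the database name from the GUI.
config=$(mktemp -d)
touch "$config/CKANZNAFLK.sqlite"   # hypothetical random job database name

# Rename rather than delete, so you can roll back if Repair goes wrong
mv "$config/CKANZNAFLK.sqlite" "$config/CKANZNAFLK.sqlite.old"

# Now run Repair from the GUI: select the job, then
# Advanced -> Database -> Repair, which builds a fresh database
# from the destination files.
ls "$config"
```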
Duplicati maintains a local database that keeps track of what is on the backend (your NAS). When you restored your computer from an image, you got back an old version of that database, which can’t know about anything backed up after the image date. As @ts678 said:
You can find the old job database’s location from the Duplicati user interface (select the job, then Advanced / Database / Location).
If you have the LinuxServer-produced Docker image (not the one Duplicati produces), we can help less, but at least we know what you have. For example, you have a backups folder. What about config? Look through that page and see if you can figure out where yours is. Your database is in there if you have something like the LinuxServer layout. If not, your database must be somewhere else on the server. The brute-force way to find it is to search for the name you see on the Duplicati GUI database screen, keeping in mind that Duplicati runs inside a container and doesn’t know where Docker put the data on the host.
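Two hedged ways to hunt for it on the host (the container name and database name below are examples, not your actual ones):

```shell
# 1) Ask Docker where the container's volumes live on the host
#    (assumes your container is named "duplicati"):
#
#    docker inspect duplicati \
#      --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'

# 2) Brute-force: search the host for the name shown on the GUI
#    database screen. Demonstrated here in a sandbox tree; on a real
#    host you would search from / (expect some permission noise).
root=$(mktemp -d)
mkdir -p "$root/var/lib/docker/volumes/duplicati/_data"
touch "$root/var/lib/docker/volumes/duplicati/_data/CKANZNAFLK.sqlite"

find "$root" -name 'CKANZNAFLK.sqlite' 2>/dev/null
```

Option 1 is usually faster if you know the container name, since it shows exactly which host paths are mapped into the container.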
Too vague with no description of the errors, but this is one way to destroy a backup. I see signs you didn’t:
Again, no description, but if it complained about extra files, you somehow managed not to delete them.
The hazard is that it considers them extra files because you wiped out its memory by installing an older version of its database. If you run a Repair to fix the problem, it fixes it by deleting the newer files. Fortunately, I assume you still have an intact copy on the NAS; otherwise this would be permanent loss.
That’s the settings database, which may be old but close enough. If it’s too outdated, delete it and enter the data manually, but some things have to match (e.g. encryption), so take good notes if you re-enter that.
Look next to it in /opt/duplicati/config for a job database. It will have random characters for its file name.
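To tell the two apart, the job database is whichever .sqlite file is not the fixed-name settings database. A sandbox illustration (the random name is hypothetical):

```shell
# Sandbox standing in for /opt/duplicati/config -- substitute your real path.
config=$(mktemp -d)
touch "$config/Duplicati-server.sqlite"  # settings database (fixed name)
touch "$config/CKANZNAFLK.sqlite"        # job database (random hypothetical name)

# The job database is the .sqlite file that is NOT Duplicati-server.sqlite
ls "$config"/*.sqlite | grep -v 'Duplicati-server'
```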
Whatever the random name is, it should match; only the path before the name will be different.
Having found the old database, delete (or rename) it and use Repair on the Database screen to rebuild.
This was described twice before. Is there anything more that you need to actually perform the operation?
Saving the entire current config folder in addition to the backups folder would have been better, but it’s too late now. Possibly you were lucky you could save backups, as a different type of crash might have destroyed that too. Generally, backing up to a different system (maybe even remote) is best. The local job database can usually be recreated (though that sometimes has issues). You can also protect the job configuration by using Export To File and saving the exported configuration somewhere safe, in case the job must be recreated.
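For next time, saving the config folder can be as simple as a dated archive copied somewhere else. A sandbox sketch (on a real system, point these at /opt/duplicati/config, or your container’s config mount, and a second machine; the sqlite names are hypothetical):

```shell
# Sandbox demonstration -- $src stands in for the Duplicati config
# folder, $safe for a remote or secondary location.
src=$(mktemp -d)
safe=$(mktemp -d)
touch "$src/Duplicati-server.sqlite" "$src/CKANZNAFLK.sqlite"

# Archive the whole config folder with a date stamp so old copies
# don't get overwritten
tar czf "$safe/duplicati-config-$(date +%F).tar.gz" -C "$src" .

ls "$safe"
```

This complements (not replaces) Export To File: the archive captures the databases and settings, while the exported job file is what you would re-import if the job ever has to be recreated from scratch.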