Restoration problem

Hello,

I have had Duplicati installed via a container in Home Assistant (Ubuntu server) since November 2022.
I have scheduled backups locally (every night) in the backups directory.

My server crashed on June 8, 2023.
As I had made a disk image of the server (via Acronis) in November 2022, I restored it on June 10, 2023 and everything works perfectly.

Of course I am missing all the data between November 2022 and June 8, 2023.
Before the restoration I managed to copy this data from Duplicati's backups directory to one of my NAS shares.

After the restoration I thought I could copy the data back into Duplicati's local backups directory, but it is impossible because I do not have the rights (permissions).

Should the permissions on the backups directory be changed? Could this affect the operation of Duplicati?

Is there another solution?

Thanks in advance for your suggestions

Best regards

Welcome to the forum @jacques

I’m not following this. You copied files to the NAS, but can’t copy back? If it’s a permission problem, fix that. This sounds like just a local file permission problem, or maybe that plus something related to using the NAS.

I suppose you could lock Duplicati out, so don’t do that. You can even set the permissions back after copying the files in.
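
A rough sketch of the copy-back from the host side (the NAS mount point, the backups path, and someuser:somegroup are only placeholders; use your own paths and whatever user/group your Duplicati container runs as):

    # Copy the saved destination files from the NAS back into the local backups directory.
    sudo cp -a /mnt/nas/duplicati-backup-copy/. /backups/
    # Give them back to the user/group the Duplicati container runs as,
    # so Duplicati is not locked out of its own files.
    sudo chown -R someuser:somegroup /backups
    sudo chmod -R u+rwX /backups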

https://hub.docker.com/r/linuxserver/duplicati Does that describe yours? If so, how’s your config done? You possibly can use the old Duplicati-server.sqlite if job configuration hasn’t changed. Don’t use an old job database, as it must match the destination files (dlist, dblock, and dindex) that you put into backups.

For safety, you can rename the old job database (just in case), then run Repair to create a current one.

The problem is trickier than that.

After the server crashed, I realized that the backup to the NAS had not worked. So I copied (via a live CD) the content of the local Duplicati backup directory, i.e. all backups made up to June 2023, to a directory on my NAS.

I performed the disk restore with Acronis.
I found my Duplicati instance working again.

  1. I copied the NAS backup directly into the local “backups” directory, having previously modified the permissions.

When I restarted Duplicati, it found errors and asked me if I wanted to repair. I said yes; it continued and then got errors trying to process the .aes files dated after November 2022.

  2. I tried a restore (the Duplicati restore function) from the NAS directory to the local backups directory,
    but Duplicati only shows me backups up to November 2022.

What I think is that the current instance of Duplicati believes the backups made between November 2022 and June 2023 were not created by it, and for safety it rejects them…

Duplicati maintains a local database that keeps track of what is on the backend (your NAS). When you restored your computer from an image, you got back an old version of that database. This database cannot know about anything that was backed up after the image date. As @ts678 said:

You can find the old job database's location in the Duplicati user interface (select the job, then Advanced / Database / Location)

I installed Duplicati as a container via Portainer on Home Assistant. The Portainer interface does not show me how to access the database.

Please read and answer the below (repeating):

If you have the LinuxServer-produced Docker image (not the Duplicati-produced one), we can help less, but at least we know what you have. For example, you have a backups folder. What about config? Look through that page and see if you can figure out where yours is. Your database is in there if you have something like the LinuxServer layout. If not, your database must be somewhere else on the server. A brute-force way to look for it is to search for the name you see on the Duplicati GUI database screen, keeping in mind that Duplicati runs inside a container and doesn't know where Docker put the data on the host.
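
A rough sketch of that brute-force search, run on the host (the file names below are examples; the job database's actual name is the random one the GUI shows, while the settings database is normally Duplicati-server.sqlite):

    # Locate the config folder by its well-known settings database name.
    sudo find / -xdev -name 'Duplicati-server.sqlite' 2>/dev/null
    # Or search for the randomly named job database shown in the GUI, e.g.:
    sudo find / -xdev -name 'XXXXXXXXXX.sqlite' 2>/dev/null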

Too vague with no description of the errors, but this is one way to destroy a backup. I see a sign that you didn’t:

Again no description, but if it complained about extra files, you somehow managed to not delete them.

The hazard is that it considers them extra files because you have wiped out its memory by installing an older version of its database. If you run a Repair to fix the problem, it fixes it by deleting the newer files. Fortunately I assume you still have an intact copy on the NAS, otherwise this would be permanent loss.

duplicati-2.0.6.3-2.0.6.3_beta_20210617 just destoryed one month worth of backup #4579
has one proposed solution for making Duplicati smarter (despite its memory wipe) at avoiding the issue.

Actually I must have the LinuxServer-produced Docker image. See inside the docker-compose.yaml file:

    duplicati:
      image: lscr.io/linuxserver/duplicati:latest
      container_name: duplicati
      environment:
        - PUID=0
        - PGID=0
        - TZ=Europe/Paris
        - CLI_ARGS= #optional
      volumes:
        - /opt/duplicati/config:/config
        - /backups:/backups
        - /opt/:/source
      ports:
        - 8200:8200
      restart: unless-stopped

I found the database: /opt/duplicati/config/Duplicati-server.sqlite.

The worst part is that I had transferred this file to my Synology NAS before restoring my disk from the image, but the NAS automatically deleted it during the transfer.

In my future backups I will save it separately, for safety.

Most of my lost data was Node-RED data, and I had backed up the flow.json. So I was able to recover 95% of my work.

Thank you for taking the time to respond to me.

Best regards.

That’s the settings database, which may be old but close enough. If it’s too outdated, delete it and enter the data manually, but some things have to match (e.g. encryption), so take good notes if you re-enter that.

Look next to it in /opt/duplicati/config for a job database. It will have random characters for its file name.

Whatever the random name is, it should match the name shown in the GUI; only the path parts before the name will be different (container path versus host path).

Having found the old database, do the delete (or rename) and use Repair on the Database screen to rebuild it.
This was described twice before. Is there anything more that you need to actually perform the operation?
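
A rough sketch on the host (the random file name is a placeholder; use the actual name you find, and ideally stop the container first so the file is not in use):

    # List the databases in the mapped config folder.
    ls -l /opt/duplicati/config/*.sqlite
    # Rename (rather than delete) the old job database, keeping it as a fallback.
    mv /opt/duplicati/config/XXXXXXXXXX.sqlite /opt/duplicati/config/XXXXXXXXXX.sqlite.old
    # Then start Duplicati again and run Repair on the job's Database screen.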

Saving the entire current config folder in addition to the backups folder would be better, but it's too late now. Possibly you were lucky you could save backups at all, as a different type of crash might have destroyed that too. Generally, backing up to a different system (maybe even remote) is best. One can usually recreate the local job database (though it sometimes has issues). The job configuration can be protected using Export To File; save the exported configuration somewhere safe in case the job must be recreated.
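
A minimal sketch of such a periodic safety copy (the NAS mount point is only an example; any separate system works):

    # Copy Duplicati's config folder (settings database + job databases)
    # to the NAS, alongside the backup destination files themselves.
    rsync -a /opt/duplicati/config/ /mnt/nas/duplicati-config/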