Hi
Fantastic work with Duplicati, I'm really excited to put it through its paces. I have a question.
I have about 600 GB of files that need to be backed up. To speed things up, it seems more efficient to let my PC do the initial crunch of files and then let the NAS server take care of updates. In testing, changing the file paths from one system seems to work. My process is:
(Local PC)
- Run duplicati locally in docker mounting target drives
- Create a backup job to backup the files to a server (shared between PC and NAS)
- Run the backup
- Export the backup job to file
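The local-PC steps can be sketched roughly like this. The image name, ports, and mount paths are assumptions based on the official `duplicati/duplicati` Docker image; substitute your own drives and config directory:

```shell
# Run Duplicati in Docker, mounting the drives to back up (read-only)
# plus a config directory for the local job database. Paths are examples only.
docker run -d --name duplicati \
  -p 8200:8200 \
  -v /mnt/data:/source:ro \
  -v /opt/duplicati/config:/data \
  duplicati/duplicati
```

Then create and run the backup job in the web UI at http://localhost:8200, point it at the storage shared between the PC and the NAS, and export the job with Export -> To file.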
(On Nas)
- Import backup job from file
- Correct paths in backup file
- Run command line repair
- (Verify files can be restored, but don't restore them)
- Run backup job (shouldn't change the already-uploaded files much)
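The NAS-side repair and verification steps above can be sketched with the Duplicati command line. The storage URL, database path, and passphrase below are placeholders; take the real values from the exported job:

```shell
# Rebuild the local database from the remote data after correcting the paths
duplicati-cli repair "ftp://server/backups/job" \
  --dbpath=/data/job.sqlite \
  --passphrase="..."

# Verify a sample of the remote files and file lists without restoring anything
duplicati-cli test "ftp://server/backups/job" \
  --dbpath=/data/job.sqlite \
  --passphrase="..."
```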
Is this the best way, or is there a better one?
I got burned once by running a repair: it went through and deleted all the dblocks that it didn't recognize. (I backed up a file on the NAS, exported the config, imported the config on the PC, ran a repair, updated the backup config to include other files, and ran a backup on the PC. I then went back to the NAS and did a repair, and all the dblocks created on the PC were deleted in the process. Version: Duplicati - 2.0.6.3_beta_2021-06-17.) That seems like a bad default condition IMO; it should abort the process and warn you of what it plans to do. To prevent this I now: export the backup job, delete the job, import the backup job, and run repair. The repair then fetches all the data from the server and populates the database correctly.
Nz