Best way to kick off new backups


Fantastic work with Duplicati, I'm really excited to put it through its paces. I have a question.

I have about 600 GB of files that need to be backed up. To speed things up, it seems more efficient to let my PC do the initial crunch of the files and then let the NAS server take care of updates. Testing suggests that changing the file paths works from one system to the other. My process is:
(Local PC)

  • Run Duplicati locally in Docker, mounting the target drives
  • Create a backup job to back up the files to a server (shared between PC and NAS)
  • Run the backup
  • Export the backup job to a file

(On Nas)

  • Import the backup job from the file
  • Correct the source paths in the backup configuration
  • Run a command-line repair
  • (Verify that files can be restored, but don’t restore them)
  • Run the backup job (shouldn’t change the already-uploaded files much)
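For reference, the steps above could be sketched with Duplicati's command-line client. The `backup`, `repair`, and `test` commands and the `--dbpath` option are real, but the storage URL, mount points, and database paths below are placeholders you would substitute for your own setup:

```shell
# On the PC: run the initial backup (destination URL and paths are examples)
duplicati-cli backup "ftp://backup-server/duplicati" /mnt/data \
    --dbpath=/config/pc-job.sqlite

# On the NAS, after importing the job and correcting the source paths:
# rebuild the local database from what is already on the destination
duplicati-cli repair "ftp://backup-server/duplicati" \
    --dbpath=/config/nas-job.sqlite

# Verify the destination files are intact without restoring them
duplicati-cli test "ftp://backup-server/duplicati" \
    --dbpath=/config/nas-job.sqlite

# Then run the backup from the NAS; unchanged blocks are not re-uploaded
duplicati-cli backup "ftp://backup-server/duplicati" /volume1/data \
    --dbpath=/config/nas-job.sqlite
```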

Is this the best way, or is there a better approach?

I got burned once by running a repair: the repair went through and deleted all the dblocks it didn’t recognize. (I backed up a file on the NAS, exported the config, imported the config on the PC, ran a repair, updated the backup config to include other files, and ran a backup on the PC. When I went back to the NAS and ran a repair, all the dblocks created on the PC were deleted in the process. Duplicati version - .) That seems like a bad default behavior, IMO; it should abort and warn you about what it plans to do. To prevent this I now: export the backup job, delete the job, import the backup job, and run repair. Repair then fetches all the data from the server and populates the database correctly.
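The workaround above amounts to forcing Duplicati to rebuild the local database from the destination instead of trusting a stale one. A minimal sketch, assuming the job's database path is known (the URL and paths are placeholders; moving the database aside should have the same effect as deleting and re-importing the whole job, which is the UI-level equivalent):

```shell
# Move the stale local job database aside so repair cannot
# "fix" the destination to match it
mv /config/job.sqlite /config/job.sqlite.bak

# Repair now recreates the database from the files on the destination
duplicati-cli repair "ftp://backup-server/duplicati" \
    --dbpath=/config/job.sqlite
```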


Welcome to the forum @notzippy

Are they on the PC, on the NAS, or somewhere else? How does the NAS phase of this get them?

PC and NAS sides are Duplicati on Linux? That’s best; mixing operating systems adds issues.
Mixing methods of reading the source adds different issues, possibly with timestamps and permissions.

Meaning what Duplicati calls “Source Data”? Your server sounds like Duplicati’s “Destination”.

That’s sort of a contradiction, unless you also advocate for some “I really mean it” flag, or dialog.

The manual says this:

which basically sounds like your intention of backing up to one server “shared between PC and NAS”.

Duplicati makes use of a local database for each backup job that contains information about what is stored at the backend.

The local database and destination are thus tied together. Having one destination and two databases is playing with fire. It can be carefully done to a limited extent (which sounds like your newest plan), as it’s similar to migrating to a different machine. Once migrated, make sure the original system stays away…

There are several open issues on this hazard. It also happens if people reinstall an old DB somehow.
My notes and proposal on how Repair might assess the situation are below, in the queue for developers.
duplicati- just destoryed one month worth of backup #4579

Two others waiting for developers (any developers around?) are below, and there might be some more.

Repair command deletes remote files for new backups #3416

Duplicati silently, permanently deleted backup from google drive - two-machine use case #3845

I would love it if somebody would implement my proposal, but some community member has to step up. Duplicati only exists and improves through volunteer effort, and there are very few volunteers right now.

Regarding the ‘auto’-delete issue, yeah, I totally agree that’s a huge problem. I’m glad I discovered it when it wasn’t mission-critical, and glad to see that at least the bug report is still open.

Regarding your question: have you tried just copying the databases? Duplicati runs on the same codebase, and the same compiled binaries, on both Windows and Linux (in Docker) via Mono. So it may be possible to just copy the databases and adjust the selected folders. If it scans successfully afterwards, I’m confident it must be working. I would dare say it couldn’t possibly accept a file on the NAS as being under backup if it wasn’t, because it must have a matching hash.
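If you try this database-copy approach, a sketch of what it might look like (the random database filename, paths, and hostnames here are invented examples; the actual database location varies by install, and Duplicati should be stopped on both machines during the copy):

```shell
# Copy the job database from the PC's config directory to the NAS
# (filename and paths are examples only)
scp /pc-config/CJKQLXNPHD.sqlite nas:/nas-config/CJKQLXNPHD.sqlite

# On the NAS, point the job at the copied database, adjust the source
# folders in the job configuration, then verify against the destination
# before running the next backup
duplicati-cli test "ftp://backup-server/duplicati" \
    --dbpath=/nas-config/CJKQLXNPHD.sqlite
```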

I would also say it would have to be significantly faster to be worth bothering with this method. Otherwise I strongly recommend avoiding such a hack, because there can be issues that end up taking more of your time and force you to start over. Here I’m just speaking from general experience with hacks, i.e. using software in a way the developers didn’t intend. In this case, however, the developers definitely intended that you can restore a backup made from another location, so if you don’t do my DB-copy trick, I think it’s pretty safe.
