Best way to kick off new backups

Hi

Fantastic work with Duplicati, I'm really excited to put it through its paces. I have a question.

I have about 600 GB of files that need to be backed up. To speed things up, it seems more efficient to let my PC do the initial crunch of the files and then let the NAS take care of ongoing updates. In my testing, changing the file paths when the job moves from one system to the other seems to work. My process is:
(Local PC)

  • Run Duplicati locally in Docker, mounting the target drives (see the sketch after this list)
  • Create a backup job that backs the files up to a server (shared between PC and NAS)
  • Run the backup
  • Export the backup job to a file
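
For reference, a minimal sketch of the Docker side, assuming the official duplicati/duplicati image and made-up host paths (adjust the image, port, and mounts to your setup):

    # Web UI on the default port 8200. /data is where the official image keeps
    # job configs and local databases (check the docs for your image/version);
    # /source is the data to back up, mounted read-only.
    docker run -d --name duplicati \
      -p 8200:8200 \
      -v /path/to/duplicati-config:/data \
      -v /path/to/source-files:/source:ro \
      duplicati/duplicati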

(On NAS)

  • Import the backup job from the file
  • Correct the paths in the backup job
  • Run a command-line repair (see the sketch after this list)
  • (Verify files can be restored, but don't restore them)
  • Run the backup job (this shouldn't change the already-uploaded files much)
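
For the repair and verification steps, a rough sketch of the command-line side, assuming the duplicati-cli wrapper that the Linux packages install; the storage URL, dbpath, and passphrase are placeholders:

    # Repair/reconcile the local database against what is already uploaded.
    duplicati-cli repair "ftp://server/backup-folder" \
      --dbpath=/path/to/job-database.sqlite \
      --passphrase="your-passphrase"

    # Verify a sample of the uploaded files without restoring anything.
    duplicati-cli test "ftp://server/backup-folder" 10 \
      --dbpath=/path/to/job-database.sqlite \
      --passphrase="your-passphrase"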

Is this the best way, or is there a better approach?

I got burned once by running a repair: it went through and deleted all the dblocks it didn't recognize. (I backed up a file on the NAS, exported the config, imported the config on the PC, ran a repair, updated the backup config to include other files, and ran a backup on the PC. I then went back to the NAS and ran a repair, and all the dblocks created on the PC were deleted in the process. Version: Duplicati 2.0.6.3_beta_2021-06-17.) That seems like a bad default behaviour IMO; it should abort and warn you about what it plans to do. To prevent this I now export the backup job, delete the job, import the backup job, and run repair. Repair then fetches all the data from the server and populates the database correctly.
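
For anyone wanting the command-line equivalent of that workaround, the point is to run repair against a database path that does not exist yet, so it rebuilds the database from the destination rather than "correcting" the destination against a stale one. A rough sketch with placeholder URL, path, and passphrase:

    # With no database at --dbpath, repair rebuilds it from the files on the
    # destination instead of deleting blocks it does not recognize.
    duplicati-cli repair "ftp://server/backup-folder" \
      --dbpath=/path/to/fresh-database.sqlite \
      --passphrase="your-passphrase"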

Nz

Welcome to the forum @notzippy

Are the files on the PC, on the NAS, or somewhere else? How does the NAS phase of this get to them?

PC and NAS sides are both Duplicati on Linux? That's best; mixing operating systems adds issues.
Mixing methods of reading the source adds different issues, for example with timestamps and permissions.

Meaning what Duplicati calls "Source Data"? Your server sounds like Duplicati's "Destination".

That's sort of a contradiction, unless you also advocate for some "I really mean it" flag or dialog.

That basically sounds like your intention of backing up to one server "shared between PC and NAS". The manual says this:

Duplicati makes use of a local database for each backup job that contains information about what is stored at the backend.

The local database and destination are thus tied together. Having one destination and two databases is playing with fire. It can be carefully done to a limited extent (which sounds like your newest plan), as it's similar to migrating to a different machine. Once migrated, make sure the original system stays away...

There are several open issues on this hazard. It also happens if people reinstall an old DB somehow.
My notes and a proposal on how Repair might try to assess the situation are in the issue below, in the queue for developers.
duplicati-2.0.6.3-2.0.6.3_beta_20210617 just destoryed one month worth of backup #4579

Two others waiting for developers (any developers around?) are below, and there might be some more.

Repair command deletes remote files for new backups #3416

Duplicati silently, permanently deleted backup from google drive - two-machine use case #3845

I would love it if somebody would implement my proposal, but some community member has to step up. Duplicati only exists and improves through volunteer effort, and there are very few volunteers right now.

Regarding the 'auto'-delete issue, yeah, I totally agree that's a huge problem. I'm glad I discovered it when it wasn't mission-critical, and glad to see that at least the bug report is still open.

Regarding your question: have you tried just copying the databases? Duplicati runs on the same codebase, the same compile, on both Windows and a Linux system using Docker, via Mono. So maybe it's possible to just copy the databases and adjust the selected folders. If it scans successfully afterwards, I'm confident it must be working. I would dare say it couldn't possibly accept a file on the NAS as being under backup if it wasn't, because it must have a hash that matches.
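
A rough sketch of that copy, assuming the Linux side keeps its databases under ~/.config/Duplicati (the job's Database screen in the web UI shows the real path and the randomly named .sqlite file) and that the NAS is reachable over SSH; stop Duplicati on both machines first:

    # Copy the job's database from the PC to the matching location on the NAS.
    # Filename, paths, and hostname are placeholders from your own setup.
    scp ~/.config/Duplicati/XXXXXXXXXX.sqlite \
        nas:/path/to/duplicati-config/XXXXXXXXXX.sqlite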

I would also say that it would have to be significantly faster to be worth bothering with this method. Otherwise I strongly recommend you avoid such a hack, because issues can crop up that end up taking more of your time and force you to start over. Here I'm just speaking from my general experience with hacks, i.e. using software in a way the developers didn't intend. In this case, however, the developers definitely intended that you can restore a backup made from another location, so if you don't do my DB-copy trick, I think it's pretty safe.
