Windows 7 is dead! Migrating Duplicati to my Linux Mint install

So Win7 has ceased to be.
I have installed Linux Mint on a new SSD and kept my data drive, which was E:\. It is now permanently mounted on /mnt/my_data/. I would now like to copy over the Duplicati job.

I exported the job to a JSON file, then did a mass find-and-replace in a text editor, changing E:\\ to /mnt/my_data/ (the backslash is ‘escaped’ with another \, hence the two).
I did the same to change \\ to /. A scripted version of the same substitution is sketched after the list below.

  • So now E:\some_dir\somefile.txt has become /mnt/my_data/some_dir/somefile.txt.
  • I cleaned up any references to special Windows folders, e.g. %My_Documents%, similarly.
  • I have found the SQLite database and could copy it over.
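
In case it helps anyone doing the same, the text-editor substitution is roughly equivalent to this small Python sketch. The file names are just examples, and it treats the export as plain text (exactly like my manual find-and-replace), so it would touch any stray backslashes too:

```python
# Rough equivalent of the manual find-and-replace on the exported job.
# File names are examples only - point them at your own export.
from pathlib import Path

src = Path("duplicati-job-export.json")   # exported Windows job (example name)
dst = Path("duplicati-job-linux.json")    # edited copy to import on Mint

text = src.read_text(encoding="utf-8")

# In the JSON export the backslash itself is escaped, so E:\ appears as E:\\.
text = text.replace(r"E:\\", "/mnt/my_data/")  # drive prefix first
text = text.replace(r"\\", "/")                # remaining path separators

dst.write_text(text, encoding="utf-8")
print(f"Wrote {dst}")
```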

BUT
When I import the newly edited JSON file, will it play nicely with the .sqlite database, or are the original paths saved there too? Are the original paths also stored in the backup data on the server?

I could easily get an SQLite tool and run a few queries on the database to change this. Do I need to? I would like to avoid a complete rebuild of my backup set, which is on a NAS drive.
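
If it comes to that, the SQL itself would be short; the hard part is knowing where your Duplicati version keeps the paths. The table and column names below (a File table with a Path column) are only a guess for illustration, not a confirmed schema, so I'd inspect the database first (.tables / .schema in the sqlite3 shell) and work on a copy:

```python
# Sketch only: rewrite Windows paths in a COPY of the local Duplicati database.
# ASSUMPTION: a "File" table with a "Path" column; the real schema varies by
# Duplicati version, so check it with .schema before running anything like this.
import sqlite3

con = sqlite3.connect("copy-of-backup.sqlite")  # work on a copy, never the live DB
with con:
    con.execute(r"""
        UPDATE File
        SET Path = REPLACE(REPLACE(Path, 'E:\', '/mnt/my_data/'), '\', '/')
    """)
con.close()
```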

Thanks for your input.

Steve

I see the paths are in the SQLite database. That raises a question: are the paths also encoded into the backup files themselves? It appears that file IDs are used?

If so, would it matter that the path in the local database has changed? Or would the backup copies be completely overwritten?

Welcome to the forum @TallSteve

How the backup process works shows the filelist.json that’s inside the dlist file. That’s probably the big pain.

While converting paths can be done with a sophisticated find-and-replace, and the dlist zip file can be updated and AES-encrypted again, there’s also a Base64 of the SHA-256 hash of the dlist file in the Remotevolume table. Going through the steps to edit that is possible, but I don’t know how many dlist files you’d need to “fix”.
--skip-file-hash-checks might be a way to temporarily suppress the usual hash check of the hacked dlists.
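
For reference, the value described above (Base64 of the SHA-256 of the dlist file) is easy to recompute once a dlist has been re-packed. This is only a sketch with an example invocation; verify its output against a file you haven't touched, so you're sure it matches what your database stores:

```python
# Compute the Base64-encoded SHA-256 of a file, e.g. a re-packed dlist volume.
# Compare against an untouched file's recorded value first, to confirm this
# matches what the Remotevolume table actually stores.
import base64
import hashlib
import sys

def file_hash_b64(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return base64.b64encode(h.digest()).decode("ascii")

if __name__ == "__main__":
    print(file_hash_b64(sys.argv[1]))
```

The table most likely records the file’s size as well, so a re-packed dlist of a different size would need that updated too.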

For a simple restore from the old drive’s backups, I think it will restore the files to some other Linux folder of your choice without any messing around at all, e.g. by using direct restore. Windows permissions are lost, of course.

If you want to back up continued changes on the old drive, the simplest solution is to start a new backup, e.g. by importing your edited JSON config export. Recent restores are then easy, and old versions can be restored as described above.

If you’re really adventurous, and really want to avoid uploading things again (and keeping two copies of basically the same files), you might be able to get Duplicati’s block-level deduplication to reattach still-valid blocks from the destination, just as would happen if you moved a file to a different folder: its data blocks are determined, found to already exist, and referenced without having to upload them again. The old location looks like a deletion though, so be careful with your retention policy, or you might lose old versions.
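
To make the block-reuse idea concrete, here is a toy illustration of fixed-size block deduplication. It is not Duplicati’s actual code; it just assumes default-style 100 KiB blocks and SHA-256 block hashes to show why a moved file uploads nothing new:

```python
# Toy illustration of block-level deduplication (not Duplicati's actual code).
# A file at a new path still yields the same block hashes, so only the file
# list referencing those blocks changes - the blocks themselves are reused.
import hashlib

BLOCK_SIZE = 100 * 1024   # Duplicati's default block size is 100 KiB
known_blocks = set()      # stands in for blocks already at the destination

def backup_file(path: str) -> None:
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            if digest in known_blocks:
                print(f"reuse  {digest[:12]}")   # already stored, nothing to upload
            else:
                known_blocks.add(digest)
                print(f"upload {digest[:12]}")   # new block would be uploaded
```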

I think I once played with a workaround for a totally missing initial dlist file (it goes up late) by building a dlist file with empty JSON (maybe []) for filelist.json, then running Database Recreate to record the blocks. Setting --no-auto-compact ensures it won’t reclaim any “wasted space”; a backup then reattaches the blocks. This scheme isn’t directly well-suited to your use, but you might be able to figure out how to adapt it…
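
If you ever try something like that, the zip step itself is trivial to script; what follows is only a sketch of that one piece. A real dlist contains more than just filelist.json, follows a specific naming pattern, and is normally AES-encrypted with the backup passphrase, so there is more to do beyond this:

```python
# Sketch of the "empty filelist" step only: a zip whose filelist.json holds an
# empty JSON array. The name below is an example; real dlist files use a
# timestamped name and usually carry a .aes suffix after encryption.
import zipfile

with zipfile.ZipFile("example-stub.dlist.zip", "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("filelist.json", "[]")
```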

If you get adventurous with this, backing up the current backup files could be done as a precaution against loss.

Thanks for that. I’ll think about what to do next.
Steve

Further remarks:

Although it’s a pain to mess with dlist files, editing the DB to match the new hashes can perhaps be avoided if a database Recreate is done from the hacked files. Save your old DB, just in case a problem occurs.

Another loophole that may help: dlist files can be rebuilt from the DB by deleting them and then doing a Repair, so if your DB-editing skills are good, you might be able to convert all the paths in one go and then regenerate the dlists.

Any of these conversions is still quite ambitious.