That helps. I caught the interruption part you wrote earlier, but it wasn’t clear that the database was saved at that point.
I gave you a method of getting Duplicati to do that by having it complete a backup. Do you not want to try it? Since it sounds like you won’t miss this backup if it gets damaged, you could even skip the usual safeguards.
Just move your old database in, create a proper destination configuration and a minimal source one, and run a backup.
Mostly, although some things (such as the passphrase for the backup) are intentionally stored separately.
Independent restore program points to a Python script that can restore without depending on Duplicati code; however, it appears to start from dlist files (which you don’t have). If I had to attempt a recovery by hand, I would probably build a dlist file first, then let Duplicati take over to actually gather the data.
The basics of the file format are in the “How the * process works” articles that I mentioned above, and the primary contents are a filelist.json file as described in the article. I can offer non-authoritative DB advice, or you can do your own exploring of the database. I suggest the Fileset, File, Blockset, and Metadataset tables.
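To make that table suggestion concrete, here is a sketch of the kind of join you would be after. Big caveat: the table and column names below are what I see in my own database, and schemas vary between Duplicati versions, so verify them against your copy before trusting anything. To keep the example self-contained it builds a tiny in-memory stand-in rather than touching a real database.

```python
import sqlite3

# Hedged sketch: table/column names match one version of Duplicati's
# local database; check yours, since schemas differ across versions.
# An in-memory stand-in lets the join be demonstrated end to end.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE File (ID INTEGER PRIMARY KEY, Path TEXT,
                   BlocksetID INTEGER, MetadataID INTEGER);
CREATE TABLE Blockset (ID INTEGER PRIMARY KEY, Length INTEGER, FullHash TEXT);
CREATE TABLE Metadataset (ID INTEGER PRIMARY KEY, BlocksetID INTEGER);
CREATE TABLE Fileset (ID INTEGER PRIMARY KEY, Timestamp INTEGER);
CREATE TABLE FilesetEntry (FilesetID INTEGER, FileID INTEGER,
                           Lastmodified INTEGER);
""")

# One sample file: blockset 1 holds the data, blockset 2 the metadata.
con.execute("INSERT INTO Blockset VALUES (1, 1, "
            "'a4ayc/80/OGda4BO/1o/V0etpOqiLx1JwB5S3beHW0s=')")
con.execute("INSERT INTO Blockset VALUES (2, 137, "
            "'5Rc4hdEFxvYIaXOfV7VteFa2hb5MVqWJxpPAiWG2MJk=')")
con.execute("INSERT INTO Metadataset VALUES (1, 2)")
con.execute("INSERT INTO File VALUES (1, ?, 1, 1)",
            ("C:\\backup source\\length1.txt",))
con.execute("INSERT INTO Fileset VALUES (1, 1543954187)")
con.execute("INSERT INTO FilesetEntry VALUES (1, 1, 1543954187)")

# The join that recovers path, data hash/size, and metadata hash/size
# for every file in one backup version (Fileset):
rows = con.execute("""
SELECT f.Path, b.FullHash, b.Length, mb.FullHash, mb.Length
FROM FilesetEntry fe
JOIN File        f  ON f.ID  = fe.FileID
JOIN Blockset    b  ON b.ID  = f.BlocksetID
JOIN Metadataset m  ON m.ID  = f.MetadataID
JOIN Blockset    mb ON mb.ID = m.BlocksetID
WHERE fe.FilesetID = 1
""").fetchall()

for path, data_hash, size, meta_hash, meta_size in rows:
    print(path, data_hash, size, meta_hash, meta_size)
```

Against a real database you would replace the stand-in with `sqlite3.connect()` on your actual file and pick the FilesetID for the backup version you want.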
Making a local test backup of a small file, plus one larger than 100KB, would be a good warm-up experiment.
Below is a dlist entry from one of my backups, so basically the job is to pull the right values from the DB and write the JSON.
{"type":"File","path":"C:\\backup source\\length1.txt","hash":"a4ayc/80/OGda4BO/1o/V0etpOqiLx1JwB5S3beHW0s=","size":1,"time":"20181204T200947Z","metahash":"5Rc4hdEFxvYIaXOfV7VteFa2hb5MVqWJxpPAiWG2MJk=","metasize":137}
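Turning the database values into an entry like that is mostly string assembly. A sketch, assuming the field names in my sample entry above and the "yyyyMMddTHHmmssZ" timestamp format my dlist files show (verify against your own dlist before relying on it):

```python
import json
from datetime import datetime, timezone

# Hedged sketch: build one dlist-style entry from values pulled out of
# the database. Field names follow my sample entry; the timestamp
# format is what my own dlist files show.
def dlist_entry(path, data_hash, size, mtime_unix, meta_hash, meta_size):
    t = datetime.fromtimestamp(mtime_unix, tz=timezone.utc)
    return {
        "type": "File",
        "path": path,
        "hash": data_hash,
        "size": size,
        "time": t.strftime("%Y%m%dT%H%M%SZ"),
        "metahash": meta_hash,
        "metasize": meta_size,
    }

entry = dlist_entry(
    "C:\\backup source\\length1.txt",
    "a4ayc/80/OGda4BO/1o/V0etpOqiLx1JwB5S3beHW0s=",
    1,
    1543954187,  # Unix time for 2018-12-04 20:09:47 UTC
    "5Rc4hdEFxvYIaXOfV7VteFa2hb5MVqWJxpPAiWG2MJk=",
    137,
)
print(json.dumps(entry))
```

A full filelist.json would be a JSON array of such entries, one per file in the version you are reconstructing.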
I hope you have some fun whichever way you go. If you come up with a useful tool, feel free to contribute it. Working with the developers on GitHub is probably the pathway to actually getting your tool into the official source repository.