Restore encrypted file, forgotten passphrase: I forgot the passphrase for an encrypted backup, so I need to restore the files without it. Is there any way to restore them? The backup was taken two years ago, and now when I try to restore the files it asks for the passphrase, which I have forgotten.
Can you describe your situation further? Do you have a total loss of the original system drive data, with only files at some backup destination whose names end in .zip.aes? And do you have no configuration export, whether to a file, a command line, a .bat file, or some other script? Is anything besides the encrypted files available?
AES Crypt is probably a faster, more user-friendly password-guessing tool if you have to resort to guessing. Sometimes people reuse passwords. Not good for security, but maybe one you use elsewhere will work. Would there be anyone else who used the system who would be willing to help guess at the password?
What Duplicati version are you trying a restore with?
I have a similar situation - I can’t seem to get past the passphrase prompt.
Scenario: Duplicati on docker image atop unraid, machine-to-machine backup via local network. Both machines configured as similarly as possible backing up their files to each other. One complete backup set.
A year later, too many drives in one machine (call it machine A) went totally bad, with too many bad sectors, hence the Duplicati config on it is not readable. So I'm attempting a direct restore of A's files from the backup on the working machine (call it machine B), using the same setup on machine B.
It doesn’t seem to like my passphrase, so I guess either its passphrase was different, or something else is wrong.
What I have available to me:
What I THINK is the passphrase.
The contents of the backup folder, including a .ssh folder at the backup destination which has the authorized_keys file (I don't understand its contents, and I'm not sure it can be used if the encryption is one-way only).
The SQLite install files and configuration of machine B - which may be similar.
I did poke around in a SQLite file on B and saw that there's a configuration table with the following; perhaps it can be used as a way to back the passphrase out?
My illogical? hope is that someone can tell me a way to somehow get my passphrase out of the above, or get around it somehow, so that I can restore my father’s files.
@Zach_Baker are you talking about the web UI password prompt? If so, stop Duplicati and start it with this additional command line option:
It will clear the password from the database, from my limited testing. Remove that command line option after the password has been cleared so that you can set a new one if desired.
If you are talking about the backup encryption password, and you have access to Duplicati-server.sqlite, you should be able to see the password in the ‘Option’ table. Find the value for ‘passphrase’ where the BackupID >= 1. The BackupID number corresponds to the backup jobs defined in the main UI.
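As a sketch of that lookup, the following Python snippet pulls stored passphrases out of a copy of the server database. The schema here is a simplified assumption (an `Option` table with `BackupID`, `Name`, and `Value` columns, matching the description above; the real Duplicati-server.sqlite may have more columns and could differ between versions), so the demo builds a toy stand-in database rather than touching a real one:

```python
import os
import sqlite3
import tempfile

def find_passphrases(db_path):
    """Return (BackupID, passphrase) rows from the Option table."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT BackupID, Value FROM Option "
            "WHERE Name = 'passphrase' AND BackupID >= 1"
        ).fetchall()
    finally:
        con.close()

# Demo against a stand-in DB built with the assumed, simplified schema;
# in real use, point db_path at a *copy* of Duplicati-server.sqlite.
path = os.path.join(tempfile.mkdtemp(), "demo-server.sqlite")
con = sqlite3.connect(path)
con.execute("CREATE TABLE Option (BackupID INT, Name TEXT, Value TEXT)")
con.execute("INSERT INTO Option VALUES (1, 'passphrase', 'hunter2')")
con.commit()
con.close()

print(find_passphrases(path))  # [(1, 'hunter2')]
```

Working on a copy of the database file avoids any chance of corrupting the live server state while Duplicati is running.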
“If you are talking about the backup encryption password, and you have access to Duplicati-server.sqlite, you should be able to see the password in the ‘Option’ table. Find the value for ‘passphrase’ where the BackupID >= 1”
It seems that I used the same passphrase between the two setups. Yeay! Thanks so much, this is going to mean a lot to someone to get their files back.
Well, since it seemed to be making no progress, I left it running for 5 days while out of town. And it seems we have a partial success. Documents folder recovered, but pictures folder/backup not.
2.0.4.23_beta_2019-07-14 is the version I'm using.
I am wondering if there is something I can do to have it attempt once more to rebuild the local database, and maybe just focus on the folders that are empty after the restore. They were there before, at least at the top level; I didn't look closely before trying the initial restore, and I told it to restore everything. Thoughts?
Thanks. Log below
The log is pretty slim:
Nov 26, 2019 4:52 PM: Failed while executing “ListRemote” with id: 2
Nov 26, 2019 4:50 PM: Error in worker
Nov 26, 2019 4:50 PM: Failed while executing “Backup” with id: 2
Nov 26, 2019 9:18 AM: Failed while executing “RepairUpdate” with id: 5a98eb1a-f5ef-4b84-b2ff-29236eee241e
Nov 26, 2019 9:18 AM: Failed while executing “Restore” with id: b823ccff-ec11-44a5-a6e4-32356530652c
Are you sure that’s About / Show Log / Live / Information (last one needs picking a level on dropdown)?
It looks more like About / Show log (and stop) which shows the Stored log from the server, but one nice thing about that one is you can click on an item and often get some detailed information about the item.
What was expected was like the below, except with more downloads and maybe also with some errors:
Dec 2, 2019 7:00 PM: Backend event: Get - Completed: duplicati-b40d8871c0ef14c82b55ba40d04d4bbd4.dblock.zip (1,002.24 KB)
Dec 2, 2019 7:00 PM: Backend event: Get - Started: duplicati-b40d8871c0ef14c82b55ba40d04d4bbd4.dblock.zip (1,002.24 KB)
Dec 2, 2019 7:00 PM: 18 remote files are required to restore
Dec 2, 2019 7:00 PM: Searching backup 0 (11/13/2019 12:01:37 PM) ...
Dec 2, 2019 7:00 PM: Backend event: List - Completed: (109 bytes)
Dec 2, 2019 7:00 PM: Backend event: List - Started: ()
Dec 2, 2019 7:00 PM: The operation Restore has started
You can set slightly higher (Retry) or hugely higher (Profiling) levels to increase the amount of logging. Seeing the files downloading is a good companion to “Downloading files” status, to see actual actions.
Assuming you ran the Restore from the backup job, there may also be a job log under job’s Show log, which would give you additional information. Did restore fail, finish, or was it still running after 5 days?
Does "once more" mean you tried before, e.g. with the Recreate button or (maybe safer) a manual rename and then the Repair button? If you ever do that, and it gets to the 70% - 100% range on progress, is very slow at that point, and is downloading dblock files (see live log), then there's a good chance the speed issue can be solved by doing it on a release with the fix, e.g. a recent Canary; however, downgrading from a Canary back onto the current Beta won't be possible. You can change Settings to the Beta channel, then wait for the next one.
For a limited database rebuild, you can also try direct restore to make a partial temporary database for what you requested restored. It’s possible (but not certain) that it will avoid the 70% - 100% slowdown.
Following your directions on the log, there’s actually only two lines:
Dec 2, 2019 8:39 AM: The operation Repair has completed
Dec 2, 2019 8:39 AM: Recreate/path-update completed, not running consistency checks
Looks like the "Live" window contents have been lost, probably due to an update of Duplicati.
Digging further into the logs in the stored view, I see nothing that doesn't line up with me killing it when nothing seemed to be getting done quickly, which was before I understood the bug that can delay the restore process. Even these two entries probably line up with me killing it to start it over:
Nov 26, 2019 4:52 PM: Failed while executing “ListRemote” with id: 2
System.Net.Sockets.SocketException (0x80004005): No route to host
at Duplicati.Library.Main.BackendManager.List () [0x00049] in :0
at Duplicati.Library.Main.Controller.b__22_0 (Duplicati.Library.Main.ListRemoteResults result) [0x00055] in :0
Nov 26, 2019 4:50 PM: Error in worker
System.Threading.ThreadAbortException: Thread was being aborted.
at (wrapper managed-to-native)
There are no job logs, since the only scenario I can work with here is a direct restore. And because direct restore is the only option, I don't know how to answer your "once more" question, as I certainly don't see any Recreate or Repair options.
My question is simply: starting from a direct restore, what local databases do I need to delete so that it truly starts fresh, in case something got goofed the first time I did the direct restore, when pointing to this folder content:
No DB kill needed AFAIK. Direct restore will show "Building partial temporary database", and the DB is not saved for future use. If you wanted it saved (only relevant for repeated restores or backups), Database management for the job shows the buttons where you could Recreate, or rename and then Repair.
Live log would work either way; the job log is in the job DB (so it doesn't persist long for a direct restore), and a log that you create for yourself to a text file will work either way (though where you set it up differs).
Why is that? I know there was some weirdness with the encryption password, but the direct restore has the same encryption and destination information as a regular job, so should be enough to recreate your original job for the restore if it got lost. Just don’t have two jobs actually back up to the same destination. A direct restore is safer in that way, because it can never back up, so can never damage the destination.
And the look at the destination screenshot adds a new angle: by default, dblock files should be 50MB and numerous, so the Restore can download just the ones it needs. There are no partial-file downloads. Choosing sizes in Duplicati talks about this, but your 2018 backup's options screen might be worded differently than this:
Though I'm still curious what the error messages would say, there's less that can be done to resolve issues when the entire backup is basically a single data file, so maybe it'd be easiest to just attempt recovering with the Duplicati Recovery tool to see what it makes of it. It's more tolerant of problems, but it's probably worse at detailed warnings and errors (which the previous effort was trying to get).
Get the backup job fixed, trying not to lose any data, e.g. save or rename DB before rebuilding.
Direct restore again. Might be pointless except for possible ability to see or log error messages.
Duplicati.CommandLine.RecoveryTool.exe which is a less particular tool which might get more.
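The earlier point about dblock sizing can be sketched as set arithmetic: with many small volumes, a restore that needs only a few blocks touches only a few volumes, while a single giant volume forces every restore to download everything. The block-to-volume map below is invented for illustration, not Duplicati's real index format:

```python
def volumes_to_download(needed_blocks, block_to_volume):
    """Minimal set of dblock volumes covering the blocks a restore needs."""
    return {block_to_volume[b] for b in needed_blocks}

# Toy index: six blocks spread over three small volumes vs. one big volume.
many_small = {"b1": "dblock-1", "b2": "dblock-1", "b3": "dblock-2",
              "b4": "dblock-2", "b5": "dblock-3", "b6": "dblock-3"}
one_big = {b: "dblock-all" for b in many_small}

needed = {"b1", "b2"}  # restoring one small file
print(volumes_to_download(needed, many_small))  # {'dblock-1'}
print(volumes_to_download(needed, one_big))     # {'dblock-all'}
```

With the small-volume layout, restoring one file downloads one volume; with everything in one file, the same restore downloads the whole backup.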
but the direct restore has the same encryption and destination information as a regular job, so should be enough to recreate your original job for the restore if it got lost.
That is good to know, but honestly my concern is file recovery, and I don't care about restoring the original job. Duplicati turned out to be a poor site-to-site solution for two UnRaid setups due to the challenges of NAT, which Duplicati doesn't try to solve; that was the killer feature we needed, and the FTP solutions we came up with were too brittle.
I would recommend you recreate the job instead of doing direct restore from backup, even if you do not intend to ever run the job again. If you recreate the job and then rebuild the local database, you will be able to do multiple restores more quickly.
The ‘direct restore from files’ option only lets you do one restore at a time and it must recreate a partial database each time. If you are only doing one restore, it might be fine, but if your intent is to restore everything I would just redo the job config.
It's in the Duplicati installation folder, possibly /usr/lib/duplicati. On systems with mono, sometimes you need to invoke it as mono followed by the .exe file name. Other systems may run mono for you automatically.
The suggestion from @drwtsn32 would get you the most "normal" environment, and would be an easy way to do small restores for a start, or to see error messages, but I'm not certain it will succeed better than direct restore did. One thing it does allow is moving your old DB aside, to ensure no bad leftovers, even though I'm pretty sure direct restore will start a fresh temporary DB on its next run (which could take a while…).
Did you notice if it seemed to take a long time on DB last time? If so, I suggest you install latest Canary which if you’re lucky will just read the dindex file, be happy, and not mistakenly keep looking for missing data. Previously there was a bug where an empty source file’s data would be sought but never found…
The recovery tool is quite different and doesn’t use the dindex files IIRC. I think it just reads the dlist for the file and folder names and content pointers by hash to dblock data, then goes to get the dblock data after decrypting the .zip.aes file. Each block of file data or metadata is a file in that .zip, named by hash.
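That mechanism can be illustrated with a toy reconstruction. The assumptions here (Base64 of the SHA-256 of a block's bytes as the zip entry name, and a dict standing in for the dlist manifest) are simplifications of what Duplicati actually does, not the RecoveryTool's real code:

```python
import base64
import hashlib
import io
import zipfile

def block_name(data):
    # Assumed naming scheme: Base64 of the SHA-256 hash of the block's bytes.
    return base64.b64encode(hashlib.sha256(data).digest()).decode()

# Build a toy dblock archive: each block of file data is a zip entry named by hash.
blocks = [b"hello ", b"world"]
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    for b in blocks:
        z.writestr(block_name(b), b)

# A dlist-style manifest: a file is an ordered list of block hashes.
manifest = {"greeting.txt": [block_name(b) for b in blocks]}

# Restore: fetch each block by hash from the dblock and concatenate in order.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as z:
    restored = b"".join(z.read(h) for h in manifest["greeting.txt"])
print(restored)  # b'hello world'
```

This is why the tool can work without dindex files: the dlist names the blocks each file needs, and every block is findable inside some dblock by its hash alone.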