I need to restore encrypted backup files but I have forgotten the passphrase. The backup was taken two years ago; now when I try to restore the files it asks for the passphrase, which I no longer remember. Is there any way to restore the files without it?
Can you describe your situation further? Do you have a total loss of the original system drive data, and only files at some backup destination, with filenames ending in .zip.aes? And do you have no configuration export, whether to a file, a command line, a .bat file, or some other script? Is anything other than encrypted files available?
It sounds kind of like Restoring files if your Duplicati installation is lost, which does have some bugs, but "Direct restore" should work, provided you know the needed information (or can get it out of something).
AES Crypt is probably a faster, more user-friendly password-guessing tool if you have to resort to guessing. Sometimes people reuse passwords. Not good for security, but maybe one you use elsewhere will work. Would there be anyone else who used the system who would be willing to help guess at the password?
What Duplicati version are you trying a restore with?
I have a similar situation - I can't seem to get past the passphrase prompt.
Scenario: Duplicati on docker image atop unraid, machine-to-machine backup via local network. Both machines configured as similarly as possible backing up their files to each other. One complete backup set.
A year later, too many of one machine's (call it machine A) drives went bad - totally bad, too many bad sectors - hence the Duplicati config on it is not readable. So I'm attempting a direct restore of A's files from the backup on the working machine (call it machine B), using the same setup on machine B.
It doesn't seem to like my passphrase, so I guess either its passphrase was different, or something else is wrong.
What I have available to me:
What I THINK is the passphrase.
The contents of the backup folder at the backup destination, including its .ssh folder which has the authorized_keys file (I don't understand the contents of that file, and I am not sure it can be used if the encryption is one-way only?)
The SQLite install files and configuration of machine B - which may be similar.
I did poke around in an SQLite file on B and saw that there's a configuration table containing the following - perhaps it can be used as a way to back it out?
blocksize: 102400
blockhash: SHA256
filehash: SHA256
passphrase-salt: v1:EB422…
passphrase: 65F1936…
My (perhaps illogical) hope is that someone can tell me a way to somehow get my passphrase out of the above, or get around it somehow, so that I can restore my father's files.
@Zach_Baker are you talking about the web UI password prompt? If so, stop Duplicati and start it with this additional command line option:
--webservice-password=
From my limited testing, it will clear the password from the database. Remove that command line option after it's been cleared so that you can set a new password if desired.
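If it helps, here is a minimal sketch of that, assuming a Linux host where the server is launched as duplicati-server (on a Docker/unRAID setup you would add the option to the container's launch parameters instead; the launcher name here is an assumption about your install):

```
# Hedged sketch: start the server once with an empty web UI password to clear it.
# "duplicati-server" is the usual Linux launcher name; adjust for your install.
duplicati-server --webservice-password=
# After the password is cleared, stop the server and remove this option again
# so you can set a new password from the UI if desired.
```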
If you are talking about the backup encryption password, and you have access to Duplicati-server.sqlite, you should be able to see the password in the "Option" table. Find the value for "passphrase" where the BackupID >= 1. The BackupID number corresponds to the backup jobs defined in the main UI.
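If you would rather check from a shell than a DB browser, something like the query below should show it. The Name/Value column names are my assumption based on the layout described above, so adjust if your table differs:

```
# Hedged example: read the per-job encryption passphrase from the server database.
# Column names (BackupID, Name, Value) are assumptions from the description above.
sqlite3 Duplicati-server.sqlite \
  "SELECT BackupID, Value FROM Option WHERE Name = 'passphrase' AND BackupID >= 1;"
```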
"If you are talking about the backup encryption password, and you have access to Duplicati-server.sqlite, you should be able to see the password in the 'Option' table. Find the value for 'passphrase' where the BackupID >= 1"
It seems that I used the same passphrase between the two setups. Yay! Thanks so much - this is going to mean a lot to someone to get their files back.
I recommend recording that password somewhere secure. If you lose your encryption password AND lose the Duplicati-server.sqlite file, your backups will forever be unrecoverable.
It might be "normal" if Duplicati is also trying to recreate the local database, and you are using a version that has a flaw with that process under certain circumstances.
Do you know if it's trying to recreate the database? Go to About / Show Log / Live / Information and copy/paste some of the recent log entries.
Well, since it seemed to be making no progress, I left it running for 5 days while out of town. And it seems we have a partial success. Documents folder recovered, but pictures folder/backup not.
2.0.4.23_beta_2019-07-14 is the version I'm using.
I am wondering if there is something I can do to have it attempt once more to rebuild the local database or something, and maybe just focus on the folders that are empty after the restore (they were there, at least at the top level; I didn't look before trying the initial restore, I told it to restore everything)… thoughts?
Thanks. Log below
The log is pretty slim:
Nov 26, 2019 4:52 PM: Failed while executing "ListRemote" with id: 2
Nov 26, 2019 4:50 PM: Error in worker
Nov 26, 2019 4:50 PM: Failed while executing "Backup" with id: 2
Nov 26, 2019 9:18 AM: Failed while executing "RepairUpdate" with id: 5a98eb1a-f5ef-4b84-b2ff-29236eee241e
Nov 26, 2019 9:18 AM: Failed while executing "Restore" with id: b823ccff-ec11-44a5-a6e4-32356530652c
Are you sure that's About / Show Log / Live / Information (the last one needs picking a level on the dropdown)?
It looks more like About / Show log (and stop) which shows the Stored log from the server, but one nice thing about that one is you can click on an item and often get some detailed information about the item.
What was expected was like the below, except with more downloads and maybe also with some errors:
Dec 2, 2019 7:00 PM: Backend event: Get - Completed: duplicati-b40d8871c0ef14c82b55ba40d04d4bbd4.dblock.zip (1,002.24 KB)
Dec 2, 2019 7:00 PM: Backend event: Get - Started: duplicati-b40d8871c0ef14c82b55ba40d04d4bbd4.dblock.zip (1,002.24 KB)
Dec 2, 2019 7:00 PM: 18 remote files are required to restore
Dec 2, 2019 7:00 PM: Searching backup 0 (11/13/2019 12:01:37 PM) ...
Dec 2, 2019 7:00 PM: Backend event: List - Completed: (109 bytes)
Dec 2, 2019 7:00 PM: Backend event: List - Started: ()
Dec 2, 2019 7:00 PM: The operation Restore has started
You can set slightly higher (Retry) or hugely higher (Profiling) levels to increase the amount of logging. Seeing the files downloading is a good companion to the "Downloading files" status, to see actual actions.
Assuming you ran the Restore from the backup job, there may also be a job log under the job's Show log, which would give you additional information. Did the restore fail, finish, or was it still running after 5 days?
Does "once more" mean you tried before, e.g. with the Recreate button or (maybe safer) a manual rename then the Repair button? If you ever do that, and it gets to the 70% - 100% range on progress, is very slow at that point, and is downloading dblock files (see live log), then there's a good chance the speed issue can be solved by doing it with a release that has the fix, e.g. 2.0.4.34 Canary; however, downgrading back onto 2.0.4.23 Beta won't be possible. You can change Settings to the Beta channel, then wait for the next one.
For a limited database rebuild, you can also try direct restore to make a partial temporary database for what you requested restored. It's possible (but not certain) that it will avoid the 70% - 100% slowdown.
If this doesn't help, then the next thing to try is probably Duplicati.CommandLine.RecoveryTool.exe, as Recovering by using the Duplicati Recovery tool explains. The URL it takes is probably similar to what Export As Command-line gives for a URL. You'll need enough free space to hold all the backup's files, plus space for the restored files. Restoring all files is the default, but it looks like that can be trimmed if needed.
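The rough shape of that process, as I recall it from the manual, is download, then index, then restore; treat the exact arguments below as assumptions to verify against the tool's own help output for your version (the angle-bracket placeholders are hypothetical):

```
# Hedged sketch of the RecoveryTool workflow (verify options against the tool's help).
# <storage-url> is the destination URL in the same form Export As Command-line shows.
mono Duplicati.CommandLine.RecoveryTool.exe download "<storage-url>" /tmp/duplicati-dl --passphrase="<backup passphrase>"
mono Duplicati.CommandLine.RecoveryTool.exe index /tmp/duplicati-dl
mono Duplicati.CommandLine.RecoveryTool.exe restore /tmp/duplicati-dl --targetpath=/tmp/restored
```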
Following your directions on the log, there are actually only two lines:
Dec 2, 2019 8:39 AM: The operation Repair has completed
Dec 2, 2019 8:39 AM: Recreate/path-update completed, not running consistency checks
Looks like the "Live" window has passed. Probably due to an update of Duplicati.
Digging further into the logs in the stored view, I see nothing that doesn't line up with me killing it when it seemed nothing was getting done fast, which was before I understood the bug in the system that can delay the restore process. Even these two entries probably line up with me killing it to start it over:
Nov 26, 2019 4:52 PM: Failed while executing "ListRemote" with id: 2
System.Net.Sockets.SocketException (0x80004005): No route to host
at Duplicati.Library.Main.BackendManager.List () [0x00049] in :0
at Duplicati.Library.Main.Controller.b__22_0 (Duplicati.Library.Main.ListRemoteResults result) [0x00055] in :0
Nov 26, 2019 4:50 PM: Error in worker
System.Threading.ThreadAbortException: Thread was being aborted.
at (wrapper managed-to-native)
There are no job logs, as the only scenario I can work with here is a direct restore. And since it is only a direct restore option, I don't know how to answer your "once more" question, as I certainly don't see any "recreate" or "repair" options.
My question is simply this: if I am starting from a direct restore, what local databases do I need to kill so that it truly starts fresh, in case something was goofed the first time I did the direct restore, when pointing to this folder content:
No DB kill needed AFAIK. Direct restore will show "Building partial temporary database", and the DB is not saved for future use. If you wanted it saved (only relevant for repeated restores or backups), Database management for the job shows the buttons where you could Recreate, or rename and then Repair.
The live log would work either way; the job log is in the job DB (so it doesn't persist long for direct restore), and a log that you create for yourself to a text file will work either way (though where you set it up is different).
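For that do-it-yourself text-file log, the relevant advanced options are --log-file and --log-file-log-level; exactly where you add them depends on whether it is a job, a direct restore, or the server itself, and the path below is just a placeholder:

```
# Hedged example: write a persistent log to a text file via advanced options.
--log-file=/tmp/duplicati-restore.log
--log-file-log-level=Retry      # or Profiling for much more detail
```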
Why is that? I know there was some weirdness with the encryption password, but the direct restore has the same encryption and destination information as a regular job, so it should be enough to recreate your original job for the restore if it got lost. Just don't have two jobs actually back up to the same destination. A direct restore is safer in that way, because it can never back up, so can never damage the destination.
And the look at the destination screenshot adds a new angle, as by default dblock files should be 50MB and numerous so the Restore can download just the ones it needs. There are no partial-file downloads. Choosing sizes in Duplicati talks about this, but your 2018 backup might be worded differently than this:
Though I'm still curious what the error messages would say, there's less that can be done to resolve issues when the entire backup is basically a single data file, so maybe it'd be easiest to just attempt Recovering by using the Duplicati Recovery tool to see what it makes of it. It's more tolerant of problems, however it's probably worse at detailed warnings and errors (which the previous effort was trying to get).
Alternatives:
Get the backup job fixed, trying not to lose any data, e.g. save or rename DB before rebuilding.
Direct restore again. Might be pointless except for possible ability to see or log error messages.
Duplicati.CommandLine.RecoveryTool.exe which is a less particular tool which might get more.
but the direct restore has the same encryption and destination information as a regular job, so should be enough to recreate your original job for the restore if it got lost.
That is good to know, but honestly my concern is file recovery and I don't care about restoring the original job. Duplicati turned out to be a poor site-to-site solution for two UnRaid setups due to the challenges of NAT that Duplicati doesn't try to solve - that is the killer feature we needed, and the FTP solutions we came up with were too brittle.
I would recommend you recreate the job instead of doing direct restore from backup, even if you do not intend to ever run the job again. If you recreate the job and then rebuild the local database, you will be able to do multiple restores more quickly.
The "direct restore from files" option only lets you do one restore at a time and it must recreate a partial database each time. If you are only doing one restore, it might be fine, but if your intent is to restore everything I would just redo the job config.
It's in the Duplicati installation folder, possibly /usr/lib/duplicati. On systems with mono, sometimes you need to invoke it as mono and then the .exe file name. Other systems may run mono for you automatically.
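For example, on a mono-based install the invocation would look roughly like this; the path is an assumption based on the usual Linux package layout, and running it with no arguments should also print usage if the help argument is not recognized:

```
# Hedged example: run the recovery tool through mono on a typical Linux install.
# The installation path is an assumption; adjust to wherever Duplicati lives.
mono /usr/lib/duplicati/Duplicati.CommandLine.RecoveryTool.exe help
```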
The suggestion from @drwtsn32 would get you the most "normal" environment, and would be an easy way to do small restores as a starter, or to see error messages, but I'm not certain it will succeed better than direct restore did. One thing it does allow is that you can move your old DB, to ensure there are no bad leftovers, even though I'm pretty sure direct restore will start a fresh temporary DB next run (which could take a while…).
Did you notice if it seemed to take a long time on the DB last time? If so, I suggest you install the latest Canary, which if you're lucky will just read the dindex file, be happy, and not mistakenly keep looking for missing data. Previously there was a bug where an empty source file's data would be sought but never found…
The recovery tool is quite different and doesn't use the dindex files IIRC. I think it just reads the dlist for the file and folder names and content pointers by hash to dblock data, then goes to get the dblock data after decrypting the .zip.aes file. Each block of file data or metadata is a file in that .zip, named by hash.
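Just to illustrate that layout (this is not what the RecoveryTool itself runs, only a manual way to peek at one volume), decrypting a single dblock with the AES Crypt command-line tool and listing it would look roughly like the following; the file name is taken from the earlier log excerpt, and the exact aescrypt flags are from memory, so verify them before relying on this:

```
# Hedged illustration: manually decrypt one backup volume and list its contents.
# aescrypt is the AES Crypt CLI; the passphrase is the backup encryption passphrase.
aescrypt -d -p "<backup passphrase>" duplicati-b40d8871c0ef14c82b55ba40d04d4bbd4.dblock.zip.aes
# The result is a plain zip; each entry inside is one block of file data or
# metadata, named by its hash, as described above.
unzip -l duplicati-b40d8871c0ef14c82b55ba40d04d4bbd4.dblock.zip
```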