I have a serious situation in which I’m trying to restore files from a backup. The original computer was destroyed by fire. The backup was going to a Box account, and I have attempted to restore both through the back end directly and by downloading all the Duplicati files to a local folder — I tried both ways. Regardless of what I do, it comes back that there are 40 missing files, then it just stops and nothing ever gets restored. I tried doing a repair and got the same message about 40 missing files, but it never repaired anything. I desperately need to recover at least some of the most recent backup files, and I have tried the Duplicati restore recovery tool to no avail.

If my files are downloaded to a local folder called “duplicati”, what is the actual command line I would need to force a restore regardless of the fact that there might be missing files? I still don’t even know how that could have happened — there’s no evidence that any Duplicati files were ever deleted out of the Box account. Basically, what command do I need to run to tell it to restore whatever it can, regardless of whether something is missing or not?
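For reference, a restore from a local folder of Duplicati files via the command line generally looks something like the sketch below. The paths, passphrase, and folder name are placeholders you would need to adjust; the `--no-backend-verification` option (discussed later in this thread) is what tells Duplicati not to abort over files it considers missing:

```
Duplicati.CommandLine.exe restore "file://C:\duplicati" "*" ^
    --restore-path="C:\restored-files" ^
    --passphrase="your-backup-passphrase" ^
    --no-backend-verification=true
```

The `^` line continuations are for a Windows command prompt; on Linux/macOS use `duplicati-cli` and `\` instead.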
Another user @mdwyer had a similar sounding issue in his thread from the other day, found here. It looks like he was eventually able to work out a solution though it looks like it was surprisingly convoluted to accomplish.
I’m still waiting on the devs (paging @kenkendk) to weigh in on his issue - especially since, when I went to test doing a “headless” restore from my own backup files as found on Google Drive, I had a nearly identical issue. My suspicion is that Duplicati is making some sort of false assumption about the files contained in the backup along the way, but I frankly don’t know enough about the internals to make a judgment.
Depending on what files are reported as missing from the destination a repair may not be able to do anything. Basically, missing files of:
- dlist (list of files included in each backup set) can be fixed by repair
- dindex (list of dblock files) can be fixed by repair
- dblock (actual file block data) can NOT be fixed by repair
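For orientation, the three file types typically appear at the destination with names along these lines (the timestamp and hash portions here are illustrative, not taken from any real backup):

```
duplicati-20180206T115100Z.dlist.zip    (one per backup version)
duplicati-b3e516a51e....dblock.zip      (raw block data)
duplicati-i1a2b3c4d5....dindex.zip      (index describing one dblock)
```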
Normally Duplicati should be able to restore even if the backend is missing files - it will just write zeros where the missing data would have gone. Can you tell us how you’re trying the restore (from the job, “Direct restore from backup files”, or “Restore from configuration” and via GUI or command line) along with the actual error you’re getting?
OK - I did some testing and I think these are your options:
1. Use the purge-broken-files command to remove references to the broken files. This is the “right” way to do it, but using the GUI-based Commandline can be confusing.
2. Figure out which files are missing (they should be in your logs), manually remove them (e.g. temporarily move them into a subfolder), then use the main menu “Restore” -> “Direct restore from backup files …” command.
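As a hedged sketch of option 1 from a plain terminal rather than the GUI Commandline (destination URL, database path, and passphrase are all placeholders):

```
Duplicati.CommandLine.exe purge-broken-files "file://C:\duplicati" ^
    --dbpath="C:\path\to\your-job-database.sqlite" ^
    --passphrase="your-backup-passphrase"
```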
Using a test backup I manually removed a dblock file and got an error like yours (but with only 1 file missing) no matter how I tried to restore. Removing the dlist file with the same date as the “missing” dblock file still errored when using the job’s Restore command, but worked when using “Direct restore from backup files”.
This is because the “Direct restore…” command uses the dlist files, rather than the local database, to know what backups to list in the Restore version selector. You should be able to restore what is restorable at this point.
I’m still trying to get my modest (~5gb) Google Drive backup to work doing a “headless” restore, i.e. “restore directly from backup files”, and have no idea how to do any of the things you’re saying here. One particular dblock file is identified as “missing” and that kills the whole restore attempt. I’ve verified that no dblock file with that name exists in the backup file set. There’s also no way for me to know which index file matches the date of the missing dblock, because my dblock file actually is “missing” (I’m guessing it was pruned out at some point by version thinning or compacting), so I never had the opportunity to check its datestamp before it disappeared.
I have no idea how to figure out which index file is trying to refer to it. I have no idea how I would do either a repair or a “purge broken files” on it, since in this scenario all I have is the backup files and no configured backup job (I do, but if I were in a crash situation, then I wouldn’t).
To me this represents broken functionality - I’d hope that it should be easy to fix (though I have no way of knowing for sure), since without the ability to do a “direct restore” a backup program loses almost all usefulness in any “disaster” scenario. This is especially scary considering that nothing at all seems wrong with the configured backup job - “verify” works fine and so does everything else I’ve tried - so I would have no way of knowing whether any of my other backup jobs currently have this vulnerability.
Sorry - I guess I misunderstood what you meant by “headless” (I thought you meant a machine with no monitor attached). So just to make sure I’m following correctly now, the scenario you’re dealing with (or simulating) is:
- you have ~5gb of source data backed up to Google Drive
- you no longer have the backup job or .sqlite file associated with that job
- you DO have access to a Duplicati web interface (such as on port 8200) on a machine with access to the Google Drive account to which the backups were made
- when you try to do a “Direct restore from backup files” you can browse to specific versions and file lists, but when trying to restore anything the job fails because it reports a missing dlist file (likely during the “Building partial temporary database…” step)
I do have access to it, but I’m pretending I don’t (i.e. simulating a total loss situation like the original poster here and in several other similar threads recently). All other things you describe are accurate as-is: the “direct restore” brings up the file list, I select one SMALL file from somewhere not too deep into the folder structure and attempt to restore it as a new copy to my desktop, and the restore utterly fails. I’m not fast enough to catch exactly what step it fails at, and the log files aren’t much good for helping to figure this out either.
For further reference: I’ve never deleted any of the backup set dblock files, though I have changed the volume size once and run a compact operation, and more recently I’ve enabled the new retention policy settings to prune old versions automatically. I have no insight into what exactly the missing dblock file is.
Thanks for the confirmation.
I’ll have to double check tomorrow, but in my testing of the scenario described above I recall finding the date of the missing file(s) in at least one place in the GUI.
That being said, it’s still not a good user experience so even if I end up being able to tell you how to “get around it” I agree it should be handled differently.
For example, maybe it should do one of these:
- prompt about the missing file(s) and ask for confirmation before continuing
- have a checkbox to “continue even if missing files”
- just do it, but log which requested files (and how many of their blocks) were affected
I should clarify further, in case this tidbit has gotten lost, that the “missing file” is not actually used at all in the current backup version as far as I can tell. That is, if I do a restore and select to restore from the configured backup set, I can select the same backup version and the same file and the restore works fine with no warning. Backups run as normal with no warning, and “verify files” runs fine with no warning. So my impression is that the “missing” dblock file is being referenced only from an orphaned index file which wasn’t deleted properly during some cleanup operation - my expectation would be that the restore functionality should be able to handle this pretty easily given that the “missing” dblock file isn’t actually used anymore.
In the case of a job-with-database restore this isn’t an issue as Duplicati can look at the database to know what blocks are stored where.
When restoring directly from the backup destination, Duplicati has no reference for what’s where, so it uses the dlist files to populate the list of what files are backed up in which versions.
Once the files to be restored have been selected, I believe the dindex files are then downloaded to build the temporary database that is used to figure out which dblock files need to be downloaded and processed to do the actual restore.
It’s possible an apparently unrelated dblock file (say one older than the file you’re trying to restore) actually contains a matching block for the “newer” file. Because of deduplication, Duplicati would have just referenced that block (in the “old” dblock) rather than uploading it again. Similarly, compacting could cause archives to be merged in a way that could make wildly differing versions share dblocks files.
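That sharing behavior falls straight out of block-level deduplication. Here is a minimal sketch (not Duplicati’s actual code, and with a toy block size) of why a block stored by an old backup version can still be referenced by a newer file:

```python
import hashlib

BLOCK_SIZE = 4  # tiny for illustration; real backup block sizes are much larger

def store(data, block_store):
    """Split data into blocks, store only unseen blocks, return block references."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in block_store:   # new block -> "upload" it
            block_store[digest] = block
        refs.append(digest)             # reference the block regardless of its age
    return refs

block_store = {}
refs_old = store(b"AAAABBBB", block_store)  # old backup version
refs_new = store(b"AAAACCCC", block_store)  # newer file shares its first block

# The newer file's first block points at data stored by the old version, so
# losing the archive holding that block breaks restores of the newer file too.
assert refs_new[0] == refs_old[0]
```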
However, I think it’s more likely that the “Direct from backup files” restore process is using a generic piece of code that assumes we always want to verify things, when in this case we may not really want to.
Can you try your Direct restore again, but this time on step 2 (Encryption) include the following in the “Advanced Options” field? (The 2nd line just says to ONLY use the destination data, rather than trying to use local file parts if they’re available and unchanged since the version being restored.)
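The two options referred to here are named later in the thread; presumably the intended “Advanced Options” entries were:

```
--no-backend-verification=true
--no-local-blocks=true
```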
Another possibility I’ve tested on a direct restore with a missing dblock file: the main menu “About” -> “Show log” -> “Live” option gives me the following error:
Feb 6, 2018 11:51 AM: Missing file: duplicati-b3e516a51e3104b92b5353962ff339c3d.dblock.zip
If I steal an idea from @Mat and create an empty file with that name at the destination, then my “Direct restore…” runs without the missing file error. Of course, if the file I’m trying to restore happened to have blocks in that archive it would be only partially restored, but at least it’s a start.
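That workaround amounts to nothing more than creating a zero-byte stand-in at the destination, using the exact name from the log line above, e.g.:

```shell
# zero-byte placeholder for the "missing" dblock reported in the live log
touch duplicati-b3e516a51e3104b92b5353962ff339c3d.dblock.zip
```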
Granted, neither of these is a fix or solution, as they can expose other failure points (see the error below), but at least they allow people to move beyond an error at which they might previously have been getting stuck.
Errors: [ Failed to patch with remote file: “duplicati-b3e516a51e3104b92b5353962ff339c3d.dblock.zip”, message: The file duplicati-b3e516a51e3104b92b5353962ff339c3d.dblock.zip was downloaded and had size 0 but the size was expected to be 16735997 => The file duplicati-b3e516a51e3104b92b5353962ff339c3d.dblock.zip was downloaded and had size 0 but the size was expected to be 16735997 ]
Good call. The restore operation still threw the same error message, but proceeded to run the database rebuild and completed the restore attempt, and did in fact restore the file I’d selected.
I then tried it a second time, with just the --no-backend-verification=true option set, to see what would happen differently (if anything), and it worked the same way.
I’m under the assumption that the “direct restore” procedure needs to somehow account for this – I wouldn’t be surprised if this were the root of the ~4 separate recent forum threads about users unable to perform a restore after a total loss (including this thread).
Agreed. I was waiting on your verification that this works before asking @kenkendk if there’s a way to have tasks like “Direct restore from backup files” default certain parameters (like --no-backend-verification=true) to simplify the end user experience when this happens.
I should clarify that the --no-local-blocks=true parameter should probably ONLY be used if:
- you are testing your backup and want to confirm it contains everything needed for a restore
- you suspect local disk corruption so don’t want to use data from local files to help speed up a restore.
Did you ever get your restore working?
If not, please try adding the --no-backend-verification=true parameter to the “Advanced Options” field on step 2 (Encryption) of the “Direct restore from backup files” process.