Backup either stuck or incredibly slow after resume from hibernate

Windows 11 Pro. I had a hard drive failure and so I’m attempting to restore my entire backup folder from that drive. After resuming from sleep, it says it’s “Restoring: 2602 files (217.80 MB) to go at 1.05 KB/s”. However, even at that very slow speed, it doesn’t seem to be counting down the MB. After a few minutes, even at 1 KB/s, I should see that .80 turn into a .70, but I’m not seeing that. I am seeing the speed update in a downward direction every once in a while.

In any case, I don’t know what to do here other than scrap the whole thing and try to restore from scratch. I read some references to people stopping the restore and starting a new one in the same place, but in the restore options, the only two options for existing files are “overwrite” or “save different version” - there’s no “skip”/“ignore” option, which makes me think this is no better than just restoring from scratch to a different location.

If I were going to restart the restore, I would also like to have some confirmation about which backup state I restored from specifically, since it wasn’t the most recent one (the most recent two backup states are very incomplete, which itself is worrying: how do I know the ones that superficially look more complete are actually complete?)

You probably want overwrite because you aren’t trying to preserve different versions.

Restore is not a file copy. Files are changed as they need changing. It’s more efficient.

The old restore flow describes Stable or old Beta. New restore flow announces current.

Other large changes

Timeouts logic on all backend operations

might also help, especially for remote destinations more prone to odd network glitches.
Throwing resume from hibernate into the mix confuses things. Maybe avoid it if possible.

It restores from the one you selected on the list of available restores on the dropdown. Question isn’t very clear. If you selected the wrong one, select the right one, then files restore as necessary to make sure you get the result you asked for, plus maybe some leftover files from the mistaken choice. If leftovers bother you, you can start over clean.

Files are changed as they need changing. It’s more efficient.

You mean restored files are not overwritten but skipped if they’re the same exact file?

It restores from the one you selected on the list of available restores on the dropdown.

Yes, but shouldn’t the backup log display which one I selected so I can double check?

If you selected the wrong one, select the right one

The whole point is that I want to make sure I’m selecting the right one.

In any case, I believe it actually finally finished, but with errors. It’s really hard for me to trust these backups. I’m trying to figure out ways to verify that the files restored are actually the full backup.

Yes. You can follow the links I posted for more details, for example:

Prepare the list of files to restore and the list of blocks each file needs.

There’s no point in taking potentially large efforts to overwrite files with exact same data.

Maybe, but there are almost no places now where an operation does such a double check.
Deleting all files on the destination is one of them, as nothing else is quite as destructive.

Read and click carefully, sanity-check your results, for example look over file timestamps.

Although it’s not a beforehand check, the job log’s Complete log can show what you did, like:

  "Messages": [
    "2025-09-08 17:49:05 -04 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Restore has started",
    "2025-09-08 17:49:05 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
    "2025-09-08 17:49:05 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (9 bytes)",
    "2025-09-08 17:49:05 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: QuotaInfo - Started:  ()",
    "2025-09-08 17:49:06 -04 - [Information-Duplicati.Library.Main.Database.LocalRestoreDatabase-SearchingBackup]: Searching backup 1 (9/7/2025 11:38:01 AM) ...",

Can’t comment without some details. Whenever you get warnings or errors, please note them.
Your job log might also preserve them, or the server log at About → Show log → Stored.

Duplicati verifies this at the end, may give warnings or errors, and they might give a clue.

If you restore to a specific empty folder initially, you can also count the files with Explorer.
You could try matching that against the RestoredFiles count in the job Complete log.
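
If clicking through Explorer’s Properties dialog for a large tree is impractical, a short script gives the same totals; this is just a sketch using the Python standard library (the commented path is this thread’s restore target, adjust to yours).

```python
import os

def count_entries(root):
    """Recursively count files and folders under root -- the same totals
    Explorer shows in a folder's Properties dialog."""
    files = folders = 0
    for _, dirnames, filenames in os.walk(root):
        folders += len(dirnames)
        files += len(filenames)
    return files, folders

# Example (path from this thread, adjust to your restore target):
# files, folders = count_entries(r"D:\backup restore 2025-09-06")
# Compare against "RestoredFiles" / "RestoredFolders" in the job Complete log.
```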

You can browse to see if everything you think should be there is there.

Beyond this, there is much data in files on Destination, and job database, if you have one.

How did you do the restore? Did you do “Direct restore from backup files”? If so, the database notes that it is a partial temporary one. If you want a permanent database, try to recreate the job.

Recreate Backup Task describes how, if you didn’t have job Export, or did you Import one?

Regardless, once you get at least a slightly sketched-in job, do a Database screen Repair.

GUI Commandline list without any commandline arguments will show the backup versions, like:

Listing filesets: 
 0	: 9/7/2025 5:14:18 PM (3 files, 41 bytes) 
 1	: 9/7/2025 7:38:01 AM (3 files, 41 bytes) 
 2	: 9/6/2025 3:21:43 PM (3 files, 41 bytes) 
 3	: 9/3/2025 8:35:59 AM (3 files, 41 bytes) 
 4	: 9/3/2025 8:32:39 AM (2 files, 2 bytes) 
 Return code: 0 

You can get all the paths by using a wildcard such as *, with version= for the wanted one.
You can do the same thing from an OS command line if you want to capture the paths in a file.

If you’re very ambitious and technical, the dlist file even has the SHA-256 sums of files which you could verify. This is what Duplicati does. The older flow of doing this looks like:

but if you see warnings and errors, it’s important to look at them to see what went wrong.
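
If you do want to try the hash check by hand, here is a sketch. It assumes that a dlist file, once the .aes layer is decrypted, is a zip containing a filelist.json whose entries carry Base64-encoded SHA-256 hashes; that matches the layout I’ve seen, but check one of your own files before relying on it.

```python
import base64, hashlib, json, zipfile

def load_dlist_hashes(dlist_zip_path):
    """Read path -> Base64 SHA-256 pairs from a decrypted dlist zip.
    Assumes the zip holds a filelist.json with "path"/"hash" entries
    (verify against your own dlist before trusting this)."""
    with zipfile.ZipFile(dlist_zip_path) as z:
        entries = json.loads(z.read("filelist.json"))
    return {e["path"]: e["hash"] for e in entries if e.get("hash")}

def file_hash_b64(path):
    """SHA-256 of a file, Base64-encoded the way Duplicati reports it."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return base64.b64encode(h.digest()).decode()
```

Walking the restore folder and comparing `file_hash_b64` of each file against the dlist entry would be the full-verification loop.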

Indeed. But when the option says “overwrite” I expect it to … overwrite. The UI should be clearer about what’s going on.

Duplicati verifies this at the end, may give warnings or errors, and they might give a clue.

It did give me both errors and warnings. Good clues, but not very confidence inspiring. And tbh I don’t really trust this restore even with the warnings, so I want some other ways for me to verify the backup.

You could try matching that against the RestoredFiles count in the job Complete log.

I did this. It doesn’t match… The number of files it says it restored is about 140 less than it seems that it actually restored. Why this would be is mysterious to me.

You can browse to see if everything you think should be there is there.

This is untenable for over 600,000 files. Plus I’ve already seen evidence that Duplicati can write corrupt files to the restore location, so even if I verify all 600,000 files, I still don’t know the restore is good.

How did you do the restore?

I went to the backup and selected “restore files”.

My current most pressing problem is this:

Duplicati.Library.Main.Operation.FilelistProcessor-MissingRemoteHash]: remote file duplicati-b50688453e5d24dd7810fc828394df504.dblock.zip.aes is listed as Verified with size 1048576 but should be 52421181, please verify the sha256 hash “mfq32T01sdofr6xC6KCFXfsnXeRjbrWY7snym4yg3z4=”

No idea why I have corrupted dblocks in here. It “successfully” restored 0 of 22 files. What can I do about this?

No clues for anybody if you don’t interpret or post them. Any recollection? Any logs?

After loss of drive, there is no backup job to go to. Did you do job import or recreate?

If you have no jobs, you have the below, and no permanent database for the log to be in.

You may have a corrupted dblock there. What is Destination type? Can you verify size?

At the default 50 MB Remote volume size (if you left it at default, or can check others), a size of 52421181 is more plausible. In addition, 1048576 is 0x100000, and any binary-even size raises suspicion because it’s not likely to happen naturally. Maybe a filesystem or network issue.
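
The binary-even suspicion is easy to check mechanically; a tiny illustrative sketch:

```python
def looks_binary_round(n):
    """True for exact powers of two -- sizes unlikely to occur naturally
    in compressed, encrypted volumes, so suspicious truncation candidates."""
    return n > 0 and (n & (n - 1)) == 0

print(looks_binary_round(1048576))   # True: the reported size is exactly 2**20 (1 MiB)
print(looks_binary_round(52421181))  # False: the expected size is not round at all
```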

Is there an actual message or screenshot? Your numbers were also far higher earlier.
You’re saying you have 600,000 files and it claimed 140 fewer than it actually restored.

It cannot write complete files if data is lost, for example if a dblock file became corrupted. Various warning and error messages would appear, which is why I keep asking for them.
If you ignore the warning and error messages, it is definitely possible to have wrong files.

I don’t follow this logic at all. If you verify a file and it’s good, then it’s good. If bad, it’s bad. Verifying files outside of Duplicati is tough, but I don’t see logic that status is unknowable.

If you have only one bad dblock, maybe it affects not very many source files. You can run The AFFECTED command to see what source files were affected. Below that are shown list-broken-files and purge-broken-files. Recovering by purging files covers their use.

GUI Commandline for the job (if you have a job) is probably an easier place than true CLI.

Or maybe you will figure out whether the wrong size is an illusion. Some destination types such as SMB can sometimes give wrong sizes for unknown reasons, direct from the OS…

What is Destination type? Is it accessible outside of Duplicati? What is Duplicati version?

This would certainly be easier if you can get a non-corrupted set of files from Destination. While checking suspicious file size, you can also check date to see if it changed recently. Duplicati checks file sizes of all Destination files often, but some destinations work poorly. Most guaranteed way of testing if file is damaged is to unzip (if a zip) or decrypt (if .aes). AES Crypt is a third-party GUI tool, and Duplicati ships SharpAESCrypt which runs in CLI.

No clues for anybody if you don’t interpret or post them. Any recollection? Any logs?

{
  "RestoredFiles": 606169,
  "SizeOfRestoredFiles": 415573583119,
  "RestoredFolders": 88816,
  "RestoredSymlinks": 0,
  "PatchedFiles": 0,
  "DeletedFiles": 0,
  "DeletedFolders": 0,
  "DeletedSymlinks": 0,
  "MainOperation": "Restore",
  "RecreateDatabaseResults": null,
  "ParsedResult": "Error",
  "Interrupted": false,
  "Version": "2.0.8.1 (2.0.8.1_beta_2024-05-07)",
  "EndTime": "2025-09-08T00:36:18.405138Z",
  "BeginTime": "2025-09-06T16:39:26.2294286Z",
  "Duration": "1.07:56:52.1757094",
  "MessagesActualLength": 15058,
  "WarningsActualLength": 18,
  "ErrorsActualLength": 179,
  "Messages": [
  ...
  ],
  "Warnings": [
    "2025-09-06 11:40:59 -05 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-MissingRemoteHash]: remote file duplicati-b50688453e5d24dd7810fc828394df504.dblock.zip.aes is listed as Verified with size 1048576 but should be 52421181, please verify the sha256 hash \"mfq32T01sdofr6xC6KCFXfsnXeRjbrWY7snym4yg3z4=\"",
    ... (two more like this ^)
    
    "2025-09-07 17:05:11 -05 - [Warning-Duplicati.Library.Main.Operation.RestoreHandler-MetadataWriteFailed]: Failed to apply metadata to file: \"D:\\backup restore 2025-09-06\\billysFile\\Graphical\\DIGITAL CAMERA\\2024-10-23\\Picks\\PXL_20241026_010322411.MP.jpg\", message: Could not find file '\\\\?\\D:\\backup restore 2025-09-06\\billysFile\\Graphical\\DIGITAL CAMERA\\2024-10-23\\Picks\\PXL_20241026_010322411.MP.jpg'.\r\nFileNotFoundException: Could not find file '\\\\?\\D:\\backup restore 2025-09-06\\billysFile\\Graphical\\DIGITAL CAMERA\\2024-10-23\\Picks\\PXL_20241026_010322411.MP.jpg'."
    ... (A bunch more like this ^)
    
  ],
  "Errors": [
    "2025-09-06 12:47:41 -05 - [Error-Duplicati.Library.Main.AsyncDownloader-FailedToRetrieveFile]: Failed to retrieve file duplicati-b4f228ccfe68540c1a505bc9b73dbf3f9.dblock.zip.aes\r\nCryptographicException: Failed to decrypt data (invalid passphrase?): Message has been altered, do not trust content",
    "2025-09-06 12:47:41 -05 - [Error-Duplicati.Library.Main.Operation.RestoreHandler-PatchingFailed]: Failed to patch with remote file: \"duplicati-b4f228ccfe68540c1a505bc9b73dbf3f9.dblock.zip.aes\", message: Failed to decrypt data (invalid passphrase?): Message has been altered, do not trust content\r\nCryptographicException: Failed to decrypt data (invalid passphrase?): Message has been altered, do not trust content",
    
    ... (one more of these ^)
    
    "2025-09-06 13:14:07 -05 - [Error-Duplicati.Library.Main.AsyncDownloader-FailedToRetrieveFile]: Failed to retrieve file duplicati-b4f2737dba4454f25bc8e9e3ff709204f.dblock.zip.aes\r\nCryptographicException: Invalid header marker",
    
    ... (one more of these ^)
    
    "2025-09-06 18:35:15 -05 - [Error-Duplicati.Library.Main.AsyncDownloader-FailedToRetrieveFile]: Failed to retrieve file duplicati-b852168eebb434eb5b32f7d1c0b2ccaec.dblock.zip.aes\r\nCryptographicException: File length is invalid",
    
    ... (one more of these ^)
    
    "2025-09-07 18:37:14 -05 - [Error-Duplicati.Library.Main.Operation.RestoreHandler-RestoreFileFailed]: Failed to restore file: \"D:\\backup restore 2025-09-06\\ok\\Classic Rock\\Top\\chill\\Beach Boys - California Dreamin'.mp3\". Error message was: Failed to restore file: \"D:\\backup restore 2025-09-06\\ok\\Classic Rock\\Top\\chill\\Beach Boys - California Dreamin'.mp3\". File hash is F3xohu6RNn1rND6TUDSXN75jVRPY0sbROXyQuBhnAzc=, expected hash is 4Tkgf9UzdnyxI9rkpXnAaZFAFH15Za+4h98spiE1ZZs=\r\nException: Failed to restore file: \"D:\\backup restore 2025-09-06\\ok\\Classic Rock\\Top\\chill\\Beach Boys - California Dreamin'.mp3\". File hash is F3xohu6RNn1rND6TUDSXN75jVRPY0sbROXyQuBhnAzc=, expected hash is 4Tkgf9UzdnyxI9rkpXnAaZFAFH15Za+4h98spiE1ZZs=",
    
    ... (a number more similar to this ^)
  ],
  "BackendStatistics": {
    "RemoteCalls": 7527,
    "BytesUploaded": 0,
    "BytesDownloaded": 389195677785,
    "FilesUploaded": 0,
    "FilesDownloaded": 7501,
    "FilesDeleted": 0,
    "FoldersCreated": 0,
    "RetryAttempts": 20,
    "UnknownFileSize": 1570,
    "UnknownFileCount": 1,
    "KnownFileCount": 15763,
    "KnownFileSize": 421077134544,
    "LastBackupDate": "2025-08-31T16:00:00-05:00",
    "BackupListCount": 483,
    "TotalQuotaSpace": 4000634109952,
    "FreeQuotaSpace": 452027482112,
    "AssignedQuotaSpace": -1,
    "ReportedQuotaError": false,
    "ReportedQuotaWarning": false,
    "MainOperation": "Restore",
    "ParsedResult": "Success",
    "Interrupted": false,
    "Version": "2.0.8.1 (2.0.8.1_beta_2024-05-07)",
    "EndTime": "0001-01-01T00:00:00",
    "BeginTime": "2025-09-06T16:39:26.2294286Z",
    "Duration": "00:00:00",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null
  }
}

After loss of drive, there is no backup job to go to. Did you do job import or recreate?

I had a backup job that was backing up my main external hard drive to another external hard drive. The main one failed. My laptop itself is completely fine. So I just went to the backup job in the UI and selected “restore files” and selected the two folders that came from my main external (omitting the one folder from my laptop harddrive, since I don’t need that restored).

What is Destination type? Can you verify size?

The destination (I assume this is where it’s backing up to) is an external hard drive. I’m not sure what you mean by “can you verify size” - size of what? If you mean the size of the lost files, not really. The hard drive is in too poor a shape to count all the files. I know the drive in total has about 475GB, so that’s sort of in the ballpark of the 392GB the backup says it has. I also didn’t back up the whole drive, just the two main folders in that drive (any top-level hidden files or other invisible stuff I didn’t back up).

Is there an actual message or screenshot?

The relevant messages are all above.

Your numbers were also far higher earlier

Yeah, after restoring the whole backup, I switched to attempting to restore the individual files that I was given warnings and errors about.

If you verify a file and it’s good, then it’s good.

I meant that I can verify that the file is there and has the right size. Does Duplicati provide a process for verifying the contents/hash of all the files? I’d love to run that verification process if possible. I believe I saw someone mention that this is run automatically after a restore, but a double check would give good peace of mind.

The AFFECTED command to see what source files were affected. Below that are shown list-broken-files and purge-broken-files

I will try that later

Most guaranteed way of testing if file is damaged is to unzip (if a zip) or decrypt (if .aes). AES Crypt is a third-party GUI tool, and Duplicati ships SharpAESCrypt which runs in CLI.

That’s a great idea. I will try that later as well.

Thank you for the help, btw!

The files on the external hard drive which is the Destination are having numerous issues, which is concerning because if they’re damaged, it means loss of some source data.

I asked that in response to the size question below (shown truncated). What is it really?

The log shows other issues which I’ll try to interpret, although the dev can do better.

“Verified with size 1048576 but should be 52421181, please verify the sha256 hash”
is caught as a length error (compared to the database record), so it asks for content checking.
You’d use a tool to get a file hash, then Base64 encode it. A tall order, and needless:
simply seeing the wrong length almost certainly means that the hash will also be wrong.
I’m asking for a length check by some other inspection whenever this error happens.
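
For completeness, the “tall order” is only a few lines of scripting; a sketch that produces the same Base64 form the warning quotes (the file name and hash in the comments are the ones from this thread’s warning):

```python
import base64, hashlib

def b64_sha256(path):
    """Base64-encoded SHA-256, the format Duplicati's warning quotes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return base64.b64encode(h.digest()).decode()

# computed = b64_sha256(r"X:\duplicati-b50688453e5d24dd7810fc828394df504.dblock.zip.aes")
# A match against "mfq32T01sdofr6xC6KCFXfsnXeRjbrWY7snym4yg3z4=" would mean the
# content is good despite the odd reported length.
```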

“Failed to decrypt data (invalid passphrase?): Message has been altered, do not trust content” is a content check failure, like the one I was asking you to run to see if a decryption works.
I’m not certain how the various checks interact, but this one is also a sign of a damaged file.

“CryptographicException: Invalid header marker” is another file damaged so badly that the expected three characters “AES” at the start of the file aren’t there (or a similar start-of-file issue). Sometimes one might find that the file is 0 length. If not, you can look inside, even with a simple editor such as Notepad (is this Windows?), to see what’s at the start.
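
That start-of-file inspection can be done without an editor; a small sketch, relying on the AES Crypt format beginning with the three ASCII characters “AES”:

```python
def check_aes_header(path):
    """Return a short verdict on an .aes file: empty, missing the
    'AES' magic bytes the AES Crypt format starts with, or plausible."""
    with open(path, "rb") as f:
        head = f.read(3)
    if not head:
        return "empty file"
    if head != b"AES":
        return "no AES header (damaged?)"
    return "header looks OK"
```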

“File length is invalid” is probably another form of the first error. If it states lengths, check them.

“PatchingFailed]: Failed to patch with remote file” due to “Message has been altered” is talking about applying individual blocks of data to a restored file, but the block is unavailable.

“MetadataWriteFailed]: Failed to apply metadata” because of “Could not find file”
might be one of the final steps before verification, where timestamps (etc.) are restored.
This can’t work if file itself was not restored due to its content dblock data being missing.

“RestoreFileFailed]: Failed to restore file” complaining about “expected hash is” is likely the usual hash-based integrity check I mentioned, run at the end of the restore, flagging a partial restore.

So all of this seemingly stems from damaged files on the USB hard drive. Length checks, decrypt attempts with other tools, etc. could confirm the damage. Possibly OS reboot, safely removing and replugging USB, etc. could clear the problems with files?

Duplicati will protect source files by backup to destination, but destination harm is trouble, generally meaning some loss of source data, so confirming damage is an important step.

At this point you’re looking at the destination drive, not inspecting a restore that gives known errors.

I said this many times above, quoted the steps, gave a link, and now cite a log example.

I don’t think it can be run by hand on demand, and even less so in the upcoming restore flow, where it checks the file hash (a mathematical fingerprint of the data) as it writes each file. The previous scheme used a patching-here-and-there approach, so it couldn’t do it that way.
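
The check-as-it-writes idea can be sketched like this; an illustration of the mechanism only, not Duplicati’s actual implementation:

```python
import base64, hashlib

def write_and_verify(dest_path, chunks, expected_b64_sha256):
    """Write a file from an iterable of byte chunks, hashing as we go,
    and report whether the result matches the expected hash -- the same
    idea as verifying each file while it is being restored."""
    h = hashlib.sha256()
    with open(dest_path, "wb") as f:
        for chunk in chunks:
            f.write(chunk)
            h.update(chunk)
    return base64.b64encode(h.digest()).decode() == expected_b64_sha256
```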

First step is to figure out whether damage to destination is real and persistent, maybe by starting with reported bad files, and possibly progressing to a full survey for file damage.

What OS is this? Both AES tools I mentioned are cross-platform, but operation may vary. Looking inside the files is more thorough than doing superficial checks such as file length.

Opening it with Notepad++ shows me what looks like almost 50 MB of NULs. Yeah, no header.

What OS is this?

Windows 11

The AFFECTED command

This was really helpful. Running this on all the dblocks I saw in warnings and errors gave me a ton more corrupted files. Very disconcerting that many of these files were not otherwise reported as having failed to properly restore and instead simply restored in a corrupted state. I was able to pull most of these from my ailing main hard drive (though not all).

Possibly OS reboot, safely removing and replugging USB, etc. could clear the problems with files?

Will try. Would love for that to be the case, but feels unlikely to me.

Be sure to give the version you care about, or I think there will be lots of extra output.

One simple but unsatisfying possible explanation is that the job log message stats are:

and each section is limited to 20 lines, I think, so you were gladly spared all the Messages, but you would have liked to see all the Errors in this case. I think the popup might give counts, but that’s unsatisfying too. If you want to see everything, you can set up a log-file.

I’m not sure I’d trust file copies from an ailing drive completely either.

Was there some mishap that might have damaged both the source and backup drives?
Does backup drive at least feel healthy enough for further usage? Probably hard to say.

The right way to do this is probably to scan for files that won’t decrypt, then move them somewhere else, then list-broken-files and purge-broken-files per the manual.

Recovering by purging files

After that, set up a log-file just in case, and see if Restore will restore what remains.
Saving both the moved files and the job database beforehand is good, also just in case.
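
The scan-and-move step could be scripted as a first pass using the cheap header check from earlier in the thread; files that pass can still fail a real decrypt, so treat this as a sketch (the quarantine folder is a made-up name):

```python
import os, shutil

def quarantine_suspect_aes(dest_dir, quarantine_dir):
    """Move .aes volumes that are empty or lack the 'AES' magic bytes
    into quarantine_dir, returning the moved names. First-pass check
    only; files that pass can still fail a real decryption."""
    os.makedirs(quarantine_dir, exist_ok=True)
    moved = []
    for name in os.listdir(dest_dir):
        if not name.endswith(".aes"):
            continue
        path = os.path.join(dest_dir, name)
        with open(path, "rb") as f:
            head = f.read(3)
        if head != b"AES":  # covers empty files and NUL-filled ones alike
            shutil.move(path, os.path.join(quarantine_dir, name))
            moved.append(name)
    return moved
```

After moving, list-broken-files / purge-broken-files (per the manual pages quoted above) would do the rest.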

If there’s a file that you’d really like back even if parts are missing, there are other tools.
RecoveryTool is one that I think tries hard to write files even if some blocks are missing. There are many things regular restore does better, and I think that’s best for most files.

When decrypting, first check for existing .zip files. There shouldn’t be any, only .aes.
Decrypting will make .zip that you can delete later. Make sure there’s enough space.
AES Crypt Command-Line Interface (CLI) can use wildcards. GUI can use multi-select.

Another approach is the test command to test all files. The basic test just checks the contents and other info against the database, but to get there it has to decrypt the files.

Ultimately it’s your data, but you wanted to verify that files that get restored are correct. Allowing Duplicati to do that is the easiest way, if you buy the missing-159-errors theory: 159 is the 179 error lines minus the 20 that the job log captured. Use a log-file for more.

Maybe someday there will be a bigger error store, but there’s probably space worry too, ultimately including how big a log a receiver (which could be a phone alert) can tolerate.

EDIT 1:

The downside of only doing this the right way is that it will skip the damaged files, where copying from the ailing drive possibly gives a better result than not having the files back at all.

A combination approach could possibly be used. The list-broken-files output will be similar to the affected output, except instead of typing file names into a command you move the files elsewhere.
