Verifying every file by default? Can I cancel?

I ran a restore from cloud source on a different machine, with default options. I thought I had imported the config, but it is not showing on the console. A temporary database was created, and the restored files look good. From what I can tell, Duplicati is verifying every file. Is this a default setting? Can I cancel “verifying restored files” without having to start over?

I don’t know if cancel works there; it’s more for backups. But verifying files is nice for safety, is it not?
It reads the restored files back to make sure they contain what was originally seen. Sound OK?

It’s 350 GB of data and took 15 hours to download. There’s no download activity now; it just appears to be verifying every file. I can see disk read activity on all the files, and nothing new has been written for 5 hours. Yes, verification is good, but I don’t have the time to spare. I did not manually enable verify or verify samples, so it must be a default setting for restore from cloud source?

I don’t think cloud has anything to do with it, but the default is to verify. If you’re determined not to, set --skip-restore-verification, whose help text reads:


After restoring files, the file hash of all restored files are checked to verify that the restore was successful.
Use this option to disable the check and avoid waiting for the verification.
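For reference, this is roughly how a skip-verification option like --skip-restore-verification might be passed on a command-line restore. The storage URL and restore path below are placeholders, not values from this thread:

```shell
# Hypothetical example: restore everything and skip the post-restore
# verification pass. Replace the URL and path with your own values.
duplicati-cli restore "s3://example-bucket/backup" "*" \
  --restore-path="/mnt/restored" \
  --skip-restore-verification=true
```

In the GUI, the same option can be added as an advanced option before starting the restore.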

and what you miss looks like the below:

If that means this was a Direct restore from backup files, it uses a partial temporary database, so I’d worry less about DB health if you stop Duplicati rudely, maybe even with a process kill.

You should test using the Stop button first to see if it does anything for this situation, as I’m not sure, although there’s some tempting-looking code checking TaskControlState.Stop, so maybe it will work.

It looks like you did not end up importing inside Add backup, but if you had, that database would be a permanent one. Because I see some possible database work in the cited code, I’d worry more about making a mess in that case.

Thank you for the fast and comprehensive support, greatly appreciated.
Since it is the default and only checking hashes, I will let it run a little longer, but I’m afraid the verify may take just as much time. Thoughts? Half the time? There’s no progress bar, but I can get a feel for the files remaining by watching the read progress… Update…
Although it says verifying, it appears files are still downloading? Or is this updating the index or other files related to the temp database? I don’t want to stop if it’s still downloading files…
…it would be nice to be able to post a screenshot here…


Yea! It just finished. Thank you.

Line 458 gets a list of files. I don’t know if there’s a particular order, but line 468 will announce files.
About → Show log → Live → Verbose if you wish, but I don’t see a direct way to check processing.
To see whether it’s downloading, you only need the Information level, but I don’t know why it would be downloading at this point.
The verification is the very last part of the restore. I think files are all there before verification starts.

@RickkeeC actually you may have hit a database speed problem rather than a verify problem. You can check that with a database explorer by looking at the Block table size: if it’s bigger than 1 million rows you can expect problems, and more so as it grows further; at 2 million blocks it’s much worse than twice the problem at 1 million. It’s a long-standing issue.
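Duplicati’s local database is SQLite, so any SQLite browser can run the row count. A minimal sketch of the check, run here against a toy in-memory table with the same name (the real path to the local .sqlite file varies per install, so a stand-in is built for the demo):

```python
import sqlite3

# In-memory stand-in for Duplicati's local database; the real file lives
# under Duplicati's data folder and the path varies per install.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Block (ID INTEGER PRIMARY KEY, Hash TEXT, Size INTEGER)")
con.executemany(
    "INSERT INTO Block (Hash, Size) VALUES (?, ?)",
    [(f"hash{i}", 102400) for i in range(1000)],
)

# The actual check: count rows in the Block table.
(n,) = con.execute("SELECT COUNT(*) FROM Block").fetchone()
print(n)  # 1000 here; over ~1,000,000 in a real DB suggests trouble
```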
With Duplicati at the version you are using, the only workaround is to raise the block size (not the dblock size!). Unfortunately this requires recreating the backup. With a 350 GB backup, I think a 5× increase in block size (from 100 KB to 500 KB) could be enough to help some.
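Rough arithmetic behind that suggestion, using the sizes from this thread (350 GB of data, 100 KB default blocksize, 500 KB proposed):

```python
# Block count scales inversely with blocksize: each block is one row
# in the Block table, so fewer, larger blocks mean a smaller table.
data_bytes = 350 * 1000**3          # 350 GB of backed-up data
for blocksize_kb in (100, 500):     # default vs. proposed blocksize
    blocks = data_bytes // (blocksize_kb * 1000)
    print(f"{blocksize_kb} KB blocks -> {blocks:,} Block table rows")
# 100 KB -> 3,500,000 rows; 500 KB -> 700,000 rows
```

Both counts are still above the ~1 million rule of thumb mentioned above, which is why the 5× increase is framed as helping “some” rather than fixing it outright.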

Assuming the original post gave the status bar message, it leads to the above-cited code chunk.

I’d agree with that, certainly and generally. There’s SQL everywhere. It sometimes scales poorly.

If this backup is ever re-done, increasing blocksize would be a good idea. For the post-restore verify,
I would think it really only needs to read the files and check each hash against the database.

Even though the Blockset for a file has its own hash (so no need to tangle with the big Block table),
there’s always a chance that some more ambitious code used Block, thus adding to total run time.
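A minimal sketch of that read-and-check loop, assuming hex-encoded SHA-256 and a plain dict standing in for the database lookup (Duplicati uses SHA-256 but stores hashes Base64-encoded; the dict and file below are illustrative only):

```python
import hashlib
import os
import tempfile
from pathlib import Path


def verify_restored(expected_hashes):
    """Re-read each restored file and compare its SHA-256 to the expected one."""
    for path, expected in expected_hashes.items():
        actual = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        yield path, actual == expected


# Tiny demo with a temporary file standing in for a restored file.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"restored file contents")
tmp.close()
expected = {tmp.name: hashlib.sha256(b"restored file contents").hexdigest()}

result = all(ok for _, ok in verify_restored(expected))
os.unlink(tmp.name)
print(result)  # True
```

The point is that a whole-file hash check only needs sequential reads plus one lookup per file, which is why per-block database queries would be the surprising (and slow) part.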

On the other hand though…

If this is a mechanical drive (is it?), it’s reading at around 20 MB per second, which is not so slow.
If it happens to be an SSD, I’d wonder what it was up to for 5 hours. Try performance tools.
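The arithmetic behind that estimate, using the numbers from this thread (350 GB read over the roughly 5 hours of verification):

```python
# Back-of-envelope read rate during the verification phase.
data_bytes = 350 * 1000**3   # 350 GB restored
seconds = 5 * 3600           # ~5 hours of verify activity
rate_mb_s = data_bytes / seconds / 1e6
print(round(rate_mb_s, 1))   # 19.4 (MB/s), i.e. "around 20 MB per second"
```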