I am aware that some users report poor (read: ridiculously slow) performance when recreating the database. The performance depends on multiple factors, such as disk type, number of files, etc.
I have it as a priority item, but there are only so many hours in a day …
Should you need to get the files out, it is possible to restore data without building the database. There is a tool bundled with Duplicati called Duplicati.CommandLine.RecoveryTool.exe which will do it for you. There is also a Python implementation of it, should you want to restore without Mono installed.
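From memory, the RecoveryTool workflow is roughly the three steps below. The storage URL, folders and passphrase are made-up examples, and the exact option names may differ in your version, so check the tool's built-in help before running it:

```
# Illustration only: substitute your own storage URL, folders and passphrase.

# 1. Download (and decrypt) all remote volumes to a local folder
Duplicati.CommandLine.RecoveryTool.exe download "ftp://example.com/backup?auth-username=user&auth-password=pass" D:\recovery --passphrase=mysecret

# 2. Build a temporary index over the downloaded volumes
Duplicati.CommandLine.RecoveryTool.exe index D:\recovery

# 3. Restore the files into a target folder
Duplicati.CommandLine.RecoveryTool.exe restore D:\recovery --targetpath=D:\restored
```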
All the errors mention ThreadAbortException, which usually happens if you force-abort the job. I assume this is what you did.
Can you try running the database recreate from the command line?
That way it is easier to diagnose what goes wrong. You just need to make sure that you point --dbpath at a non-existing file, and Duplicati will build the database in that file.
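For example (a sketch only; substitute your own storage URL, passphrase and paths, and drop the leading mono on Windows):

```
# Made-up values for illustration: pointing --dbpath at a file that does not
# exist yet makes Duplicati recreate the database in that file.
mono Duplicati.CommandLine.exe repair "ftp://example.com/backup?auth-username=user&auth-password=pass" --dbpath=/tmp/rebuilt.sqlite --passphrase=mysecret
```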
Processing all of the 3536 volumes for blocklists: duplicati-b000a9e62beba4cda9f9de9b4c063b477.dblock.zip.aes, duplicati-b004f9199370a45fe8705d2a1c579c787.dblock.zip.aes
etc., for a bunch of similar lines, then:
It’s still logging; it appears to download a file every 5 minutes or so, with the CPU at 100%.
Above it said “Processing all of the 3536 volumes for blocklists”. Does that mean it is downloading 3536 files at ~50 MB each??? That’s about 176 GB, the whole backup.
Yes, it is now downloading everything, because some information that it expected to find in the dindex files is missing for some reason. Since there is no way of knowing which of the dblock files holds that information, it just keeps running through them until it finds what it needs.
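In rough pseudocode terms the fallback behaves like the sketch below. This is an illustration of the strategy only, not Duplicati's actual C# code, and all names and data shapes here are invented:

```python
# Illustration only: a simplified model of the recreate fallback, not Duplicati's code.

def recreate_blocklists(dindex, dblocks, needed):
    """dindex:  {block_id: volume_name} entries read from the small index files.
    dblocks: {volume_name: {block_id: payload}} simulating the ~50 MB volumes.
    needed:  set of block_ids the new database requires."""
    found = {}

    # Cheap path: the dindex files say exactly which volume holds each block,
    # so only the volumes that are actually needed get fetched.
    for block_id in needed & set(dindex):
        volume_name = dindex[block_id]
        found[block_id] = dblocks[volume_name][block_id]

    # Expensive path: any block the index files do not account for forces a
    # linear scan; every volume must be downloaded, decrypted and unpacked
    # just to see what is inside, until the missing blocks turn up.
    missing = needed - set(found)
    for volume in dblocks.values():        # potentially all 3536 volumes
        if not missing:
            break                          # stop early once everything is found
        for block_id in missing & set(volume):
            found[block_id] = volume[block_id]
        missing.difference_update(volume)

    return found

# Tiny demo: block "b2" is absent from the index, so the volumes get scanned.
print(recreate_blocklists(
    dindex={"b1": "vol1"},
    dblocks={"vol1": {"b1": b"data1"}, "vol2": {"b2": b"data2"}},
    needed={"b1", "b2"},
))
```

The worst case is exactly what the log shows: if even one expected entry is absent from the dindex files, the scan can end up touching every dblock volume in the backup.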
I figured that’s what it must be doing. Why does it take such a long time to download and process the files? It’s going to take 12 days to download everything at its current speed.