Very slow database recreation

About → System information can confirm that. Is the backup old? Older Duplicati had more problems.
Also a working database (which you used to have) might unfortunately mask issues at the destination.

Any recollection of its error messages or other symptoms?

Ideally it should get to 70% and be done. The 90% to 100% range is a last-ditch search for data blocks that weren't recorded in any of the dindex files it found. Duplicati doesn't know which dblocks those blocks live in without downloading and reading them all.
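As a rough sketch of what that search has to do (a hypothetical helper, not Duplicati's actual code): each dblock is a zip volume whose entries are, as I understand it, named by the base64 hash of the block they contain, so finding an unrecorded block means fetching every unindexed dblock and scanning its entry names:

```python
import zipfile

def find_missing_blocks(dblock_paths, wanted_hashes):
    """Scan dblock zip volumes for blocks the Recreate still needs.
    Hypothetical stand-in for the final-10% search: since entries
    inside a dblock are named by their block hash, membership can be
    checked from the name list alone once the volume is downloaded."""
    found = {}
    for path in dblock_paths:
        with zipfile.ZipFile(path) as z:
            for name in z.namelist():
                if name in wanted_hashes:
                    found[name] = path  # remember which volume holds it
    return found
```

The expensive part is not the scan itself but having to download every remaining dblock just to look inside, which is why this phase dominates the runtime.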

I think the final-10% search goes all the way to the end, with no option to stop and no periodic status checks. Clearly there is room for improvement, though stopping early might sacrifice other things, such as getting data back.

Sometimes it gets to the end and is still missing data, due to backup corruption or some other problem. Recording the error messages at that point is important, because we don't want to repeat any lengthy runs.

People who don’t care much about old file versions find it faster to start over. People who do care may try to speed things up by adding resources, e.g. using a fast parallel downloader to copy the backup to a local drive that Duplicati can read for the Recreate (the backup files are portable), adding ramdisks, SSDs, and so on.
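As a sketch of the local-copy approach (remote name and paths are placeholders, and this assumes rclone is already configured for your backend):

```shell
# Pull all backup volumes locally with parallel transfers
rclone copy remote:duplicati-backup /mnt/fast-ssd/duplicati-backup --transfers 16

# Then point the repair/Recreate at the local copy instead of the remote
Duplicati.CommandLine.exe repair "file:///mnt/fast-ssd/duplicati-backup" --dbpath=/path/to/rebuilt.sqlite
```

Because the dblock downloads dominate the final phase, a parallel copy to fast local storage can remove most of the per-volume latency.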

People who just want their data back have other restore options that don’t need the usual database build.
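In the GUI that is Restore → "Direct restore from backup files". A rough CLI equivalent (paths are placeholders; check your version's option list before relying on this) would be something like:

```shell
# Restore everything straight from the backup files, skipping the local database
Duplicati.CommandLine.exe restore "file:///mnt/fast-ssd/duplicati-backup" "*" --restore-path=/mnt/restore --no-local-db
```

This builds only the minimal temporary information needed for the restore, rather than the full database that Recreate produces.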

Manual efforts to kill the Recreate and accept some losses have not always worked, but it could be tried…

blocksize defaults to 100 KB, which might be slightly low for a backup of that size. Raising it may speed up the Recreate’s early section, which ran quickly for you, but won’t help the abnormal search-everything phase you’re in…
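The reason blocksize matters is simple arithmetic: the database has to track one row per block, so the work scales with the block count. A quick illustration with a hypothetical 500 GB source:

```python
def block_count(backup_bytes, blocksize_bytes):
    """Rough number of blocks Duplicati must track; database work
    during Recreate scales with this count."""
    return -(-backup_bytes // blocksize_bytes)  # ceiling division

gb = 1024**3
print(block_count(500 * gb, 100 * 1024))   # 5242880 (~5.2 million blocks at the 100 KB default)
print(block_count(500 * gb, 1024 * 1024))  # 512000 (~0.5 million blocks at 1 MB)
```

A tenfold larger blocksize means roughly a tenth of the block records, at the cost of less efficient deduplication and more data transferred per changed file.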

Choosing sizes in Duplicati

Regardless, blocksize can’t be changed on an existing backup, but which direction would you like to try?