Database recreate performance

Have you tested Exclude files from a Time Machine backup on Mac (macOS User Guide)?
Note Apple’s comment there about APFS local snapshots, though I see you’re talking about the backup itself here.
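If you’d rather script the exclusion, macOS also has tmutil for this. The path below is only a placeholder for whatever you decide Time Machine should skip, so substitute your own:

sudo tmutil addexclusion -p /path/to/whatever/you/exclude
tmutil isexcluded /path/to/whatever/you/exclude

The second command just confirms the exclusion took.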

That might be an SSD downgrade, although it depends on which model you got.
I don’t know all the Apple model details, but you can search for yours. One example:

512GB version of the new MacBook Pro has a slower SSD than the Mac it replaces
“Apple is using fewer chips in M2 Macs to provide the same amount of storage.”

View disk activity in Activity Monitor on Mac can be tricky to interpret, but does it give any clues?
I’d prefer a queue length or a load average that includes I/O wait, but I’m not a macOS expert.
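If Activity Monitor is hard to read, a rougher Terminal check is iostat, which prints per-disk throughput once a second (it shows transfer rates rather than the queue length or I/O wait I’d really like):

iostat -d -w 1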

If Duplicati is affecting other programs, that effect has to come from somewhere, so keep looking if you like.
You can also make Duplicati more polite, yielding to other demands at the cost of slowing itself down.

use-background-io-priority would be the one to try for the drive contention not yet investigated.
thread-priority may help with CPU, however the breadth of the slowdown suggests CPU is probably fine,
except maybe for programs which can’t use multiple cores easily, such as Duplicati’s SQLite database.
That could be going flat out on one core while the overall load reads only 10% across 10 cores.
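Both of those are just Advanced options on the backup job, or extra arguments on an exported command line. A rough sketch of the idea (I’m quoting values from memory, so check each option’s help text for what it actually accepts):

--use-background-io-priority=true
--thread-priority=low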

Big backups (e.g. over 100 GB), such as some of yours, need a larger blocksize,
because without it some of the SQL queries get really slow. You can watch them in
About → Show log → Live → Profiling if you like. Also see the @gpatel-fr comments.

That’s likely RAM resident. If so, it wouldn’t notice an issue even if the SSD were slow.
Chrome, and pretty much any browser, will write the download to the drive as a cache.

Assuming that’s a Duplicati Get, first make sure the time in question is the Started-to-Completed time:

2022-10-17 16:52:32 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-b570acf7705434570b871528c607064ff.dblock.zip.aes (45.16 MB)
2022-10-17 16:52:39 -04 - [Profiling-Duplicati.Library.Main.BackendManager-DownloadSpeed]: Downloaded and decrypted 45.16 MB in 00:00:06.6162567, 6.83 MB/s
2022-10-17 16:52:39 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-b570acf7705434570b871528c607064ff.dblock.zip.aes (45.16 MB)
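In the sample above, Started is 16:52:32 and Completed is 16:52:39, so the whole Get took about 7 seconds, in line with the 6.6 seconds the middle line reports for download and decryption.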

The file does get written to the drive. The “politeness” options above can slow your download.
I’m not sure whether any decryption slowness (per the middle line) would delay the Completed line.

If you want a purer download test, use Export As Command-line to get your target URL, then run a get
with Duplicati.CommandLine.BackendTool.exe and see if that also takes a minute to finish.
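Roughly like this, where the target URL is a placeholder for the one in your Export As Command-line output, and the file name is one seen in the live log; on macOS you would typically run the tool through mono:

mono Duplicati.CommandLine.BackendTool.exe get "<target URL from the export>" duplicati-b570acf7705434570b871528c607064ff.dblock.zip.aes

The file lands in the current directory, so you can time how long that takes.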

Any breakage needs a detailed description; just saying it “corrupted” gets us nowhere.
Feel free to cite previous topics if you think we’ve already worked through some of those cases.
I’m pretty sure I haven’t been in one involving the cat, but that one is especially odd, since a
source interruption (while maybe risky to the source drive) is very far removed from the database.

Ideally the description has steps reproducible on any OS, so more people can test.
If need be, we could probably ask @JimboJones, who I think actually has macOS, to run them.

I sure hope not. Some people like their file history, and we try hard to let them keep it…
Beyond that, if database recreate isn’t working, it impedes disaster recovery (e.g. drive loss).
Other routes such as Duplicati.CommandLine.RecoveryTool.exe are for emergencies only.

In the current case, especially with any large backup, a blocksize change needs a fresh start.
I heard talk of 12 TB of backups, so keep the current rough 100 GB rule of thumb in mind.

Please change default blocksize to at least 1MB #4629 would boost the 100 GB advice to roughly 1 TB.
Despite years of SQL improvements, Duplicati still slows down when tracking too many blocks.
Solving that is simple (a bigger blocksize) if one knows in advance or is willing to start fresh.
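For what it’s worth, blocksize itself is just another Advanced option, set when the new backup is created (as noted, it can’t be changed afterwards without a fresh start). As a rough sketch only, scaling the same rule of thumb linearly, a multi-TB backup would land somewhere in the megabytes, e.g.:

--blocksize=10MB

but treat the exact number as my extrapolation from that rule, not an official recommendation.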