"Failed to connect: The database file is locked database is locked" during "Deleting unwanted files …" never terminates

Welcome to the forum @danblakemore

There’s no such path, although the result sounds more like Viewing the log files of a backup job.
That would start at the job’s Show log. Logs are kept in the job database, but it’s busy during the backup.
The other reason not to look for the job’s result log is that it isn’t there until its backup has finished.

Viewing the Duplicati Server Logs appears to be miswritten, but it’s aiming at About → Show log.
From there you can click either Server, for failed operations that didn’t make a job log, or Live for a scrolling view of what’s going on. For your situation, Verbose might be a good level. Profiling is more detailed, and Information is less, but if it’s actually Compacting files at the backend, you’ll see some files move.
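
If watching the live log is awkward, another approach (just a sketch on my part; the file path is only an example) is to have the job write its own log file using the standard advanced options:

```
--log-file=C:\tmp\duplicati-job.log
--log-file-log-level=Verbose
```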

The only mention of the database lock is when (it seems) you asked for logs from the locked database.
Was there some situation where you saw that complaint without trying to get logs you can’t get?

I’m not sure this would have bothered Linux; I think Windows is stricter about locked files. Regardless, exclude the DB from the backup source. The active DB is instantly out of date with the destination if you grab it while its own backup is running, and an out-of-date DB is useless or worse: it can damage the destination if it’s put into use and a Repair is then run.
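
For example, a sketch assuming the default database location on Windows (the username placeholder is hypothetical; adjust to wherever your job databases actually live), so the job’s own .sqlite files stay out of the source:

```
--exclude=C:\Users\<username>\AppData\Local\Duplicati\
```

In the GUI the same thing goes in the Filters section of the Source Data screen.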

You could also see whether this event is stuck or the Processed numbers are still going up. That’s a big backup, so I’m hoping you set up a scaled-up blocksize; 5 MB might be nice. The default gets slow after about 100 GB.
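
For reference (a sketch; I don’t know your actual settings), blocksize is the --blocksize advanced option, and it has to be chosen before the first backup, since it can’t be changed afterward without starting the backup over:

```
--blocksize=5MB
```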

EDIT 1:

These statements sound contradictory. The first says the initial backup never finished. The second mentions an incremental backup, suggesting the initial backup finished and incrementals are now busy, possibly in a long compact.

EDIT 2:

This is confusing too. If it never finishes, how do you start over and hit it again? If the situation is that the initial backup actually completed, some incrementals completed, and then they stopped completing, that behavior would suggest wasted space built up and a compact finally ran, but isn’t running very fast.

Long compacts usually happen when version deletions release blocks, and that’s driven by retention.
The no-auto-compact option can stop compacts, and a manual compact can be run with the Compact now button.
Compact has its own section in the job log, so you can see what sorts (if any) ran before. Some don’t repackage partially filled files into full ones; an easier case is when a whole file can be deleted. The log shows which happened.
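
As a sketch of the relevant knobs (the placeholders are mine, not from your setup): the advanced option

```
--no-auto-compact=true
```

stops the automatic compacting, and a compact can also be run by hand from the command line against the job’s destination and local database:

```
Duplicati.CommandLine.exe compact <storage-URL> --dbpath=<path-to-job-database>
```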

EDIT 3:

There’s no statement on I/O activity (or the lack of it). The database lock situation needs clarification, but may be unrelated. The spinning claim could be slow SQL if you’re on the default blocksize (are you?); a Profiling-level log might show it. Sometimes SQL causes no I/O because it hits a memory cache; other times it may actually need I/O. SQLite is not threaded in a way that can eat your whole CPU, so if you have a quad core, 25% is about right.
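
To catch slow SQL, the same log-file idea sketched earlier works at Profiling level (path again just an example):

```
--log-file=C:\tmp\duplicati-profiling.log
--log-file-log-level=Profiling
```

Profiling output gets large quickly, so it’s best kept on only while you’re investigating.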

The OS can also soak up I/O. I currently have a slow large backup with 14 TB of SQLite reads where almost nothing is actually hitting the hard drive, fortunately. That one is a USN issue, so it can’t be your situation…