Recovering from message alteration

That’s not a guaranteed issue. I think the count should match the Fileset row count and the count of Remotevolume rows with Type ‘Files’ (ignoring State ‘Deleted’). I have 21 expected dlist files, which matches my home page’s “21 Versions”.
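If you want to check that on your own database, something along these lines against a copy of the job database should do it (the path is a placeholder; Fileset, Remotevolume, Type, and State are the names as they appear in my local database):

  sqlite3 /path/to/copy-of-job-database.sqlite "
    SELECT COUNT(*) FROM Fileset;
    SELECT COUNT(*) FROM Remotevolume WHERE Type = 'Files' AND State != 'Deleted';
  "

The two numbers should match each other and the dlist count at the destination.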

I forgot about log-retention, which will interfere with this and with LogData, but the default is 30 days unless changed. Another way to lose old data of this sort is to recreate the database; that gets the essentials but not the log info.
The Operation table does not seem to be trimmed. Mine goes back to Oct 6, but the two trimmed tables only go back 30 days.
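If you want to see how far back each table goes, a rough check is below (path is a placeholder, and this assumes the Timestamp columns are Unix seconds, which is what mine look like):

  sqlite3 /path/to/copy-of-job-database.sqlite "
    SELECT 'Operation',       datetime(MIN(Timestamp), 'unixepoch') FROM Operation;
    SELECT 'LogData',         datetime(MIN(Timestamp), 'unixepoch') FROM LogData;
    SELECT 'RemoteOperation', datetime(MIN(Timestamp), 'unixepoch') FROM RemoteOperation;
  "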

It might be the RemoteOperation table trimming, but the retention trim seems too soon unless you decreased it. The lack of such trimming is another reason an external log file is best. It also shows what the DB rolls back…

There are some issues where, for example, a compact deletes a dindex, errors out, and rolls back the DB data. The actual destination file deletion remains, of course, so the next backup thinks it has a missing file. But I digress…

Back to the dlist: your retention policy deletes backup versions, so maybe that’s why there’s no aged dlist. An external log at even log-file-log-level=Information is enough to show file deletions; the Retry level is a little better.
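If you want one, it’s just two advanced options on the job, for example (path is a placeholder):

  --log-file=/var/log/duplicati/job.log
  --log-file-log-level=Retry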

0 is not exactly an all-went-fine result, but it can occur if a backup is interrupted. It would be listed as partial in the GUI.
Converting some Timestamp values to UTC, it looks like the Aug 27 backup got deleted by retention. There’s a gap in the Fileset table:

1629252391 August 18, 2021
1632276432 September 22, 2021
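For reference, the conversion is just SQLite’s own datetime(); you can list the whole Fileset table in UTC the same way to spot gaps (path is a placeholder):

  sqlite3 /path/to/copy-of-job-database.sqlite \
    "SELECT ID, datetime(Timestamp, 'unixepoch') FROM Fileset ORDER BY Timestamp;"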

Do you by any chance have log-retention set to 14D?

To have fewer of these (and more verification, and unfortunately more B2 download charges), you can use backup-test-samples or backup-test-percentage to keep a closer eye on the contents of the uploaded files.
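Either of these as an advanced option will test more than the default single sample set after each backup (the numbers are just illustrations):

  --backup-test-samples=5
  --backup-test-percentage=1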

The TEST command, run from the GUI Commandline or a shell, can also test as many files as you like, on demand.
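From a shell, something like the line below (the destination URL, count, and path are placeholders; “all” instead of a number tests everything, at the cost of downloading it all, and you’d add whatever destination and encryption options the job normally uses):

  duplicati-cli test "<destination URL>" 10 --dbpath=/path/to/job-database.sqlite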

I was hoping that more could be gotten from the DB, but there’s too much missing. The damaged dblock was uploaded at the end of the backup, and even though the dindex files were started later, the dblock files might have finished after the dindex files did (because they’re larger). I don’t know whether future inspection will show an issue-at-the-end pattern.

You can stop file uploads from finishing out of sequence by setting this option to 1, though it may slow upload rates:

  --asynchronous-concurrent-upload-limit (Integer): The number of concurrent
    uploads allowed
    When performing asynchronous uploads, the maximum number of concurrent
    uploads allowed. Set to zero to disable the limit.
    * default value: 4
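As an advanced option that would just be (value shown is the fully sequential case):

  --asynchronous-concurrent-upload-limit=1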

Free options include increasing log retention, setting up an external log file, playing with upload concurrency, and looking at damaged files for any interesting patterns (e.g. empty blocks in the middle or at the end of a file).
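For the empty-block check, a crude sketch that lists the decimal offsets of zero-filled 16-byte stretches in a downloaded file (file name is a placeholder; a run of such lines suggests a zeroed region):

  od -A d -v -t x1 damaged-file | grep -E '^[0-9]+( 00){16}$'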

You might be able to correlate errors with something. For example, you might find system boots like this:

 who --boot /var/log/wtmp
         system boot  2021-01-31 19:42
         system boot  2021-01-31 20:13
         system boot  2021-10-08 09:45
         system boot  2021-10-08 10:26
         system boot  2021-11-30 15:14

Would you consider helping any with either SQL queries or issues that might flat-out be SQLite? Scaling to large backups has been a problem, and I’ve seemingly had DB Browser for SQLite reading a DB for 26 days.
There’s other technical info there, and some concerning reports about SQLite performance dropoffs here.