Hash and size differs on dblock

If I follow this, Duplicati intended to write 221559661 bytes, but after the backup the verification file list showed 171966464, which this time is not an even power of 2. I suppose you could look at the start of the bad NAS file (maybe even on the NAS, in case the client has a different view) to see whether it starts with AES; if it does, the file might be intact as far as it got written. Why it stopped is a good question, but CIFS has bugs.
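
A quick way to peek at the first bytes is something like this (file name and path are just placeholders; an intact AES Crypt file begins with the ASCII magic AES):

    # Placeholder path; run it on the NAS if possible, and on the client for comparison.
    head -c 3 /volume1/backup/duplicati-bXXXX.dblock.zip.aes ; echo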

[Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError] is one such bug, where the problem is verified (by test) to be fixed in recent kernel versions, but not by all vendors who backport fixes into older kernels in the hope of keeping things the-same-but-better. Enterprise and LTS kernels favor that approach…

openSUSE Leap 15.3 Bridges Path to Enterprise says it’s like May’s SUSE Linux Enterprise 15 SP3.
There’s a small chance that a kernel fix helps CIFS. If you intend to update to 15.3 anyway, you would find out.

Other than that, debugging options such as strace are possible, but very cumbersome for very rare issues.
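
If you do go that route, a rough sketch is below; the process selector, output path, and syscall filter are only guesses and would need adjusting to the actual setup:

    # Attach to the running Duplicati/mono process, follow children, and log
    # file-related syscalls plus write/close with timestamps.
    strace -f -tt -e trace=file,write,close \
        -o /tmp/duplicati-syscalls.log \
        -p "$(pgrep -f -n Duplicati)"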

You might need to test a different way of putting files onto the NAS, e.g. perhaps it supports NFS or SFTP?
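
If the NAS does offer SFTP, for example, only the destination URL of the job would change, roughly like this (host, path, and credentials are placeholders):

    duplicati-cli backup "ssh://nas.example.com/backup/duplicati" /home/user/data \
        --auth-username=backupuser --auth-password=secret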

Compact never shrinks an existing file. It writes a new one, then deletes the old ones that fed the compact.
You can see this in the operations if you watch them. You can prevent compact from running with the no-auto-compact option, and you can also see the compact decision in a log at or above Information level:

2021-05-22 11:42:55 -04 - [Information-Duplicati.Library.Main.Database.LocalDeleteDatabase-CompactReason]: Compacting because there is 25.47% wasted space and the limit is 25%
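
As a sketch, the options involved look roughly like this (command name and log path vary by install):

    # Either stop compacting entirely, or keep an Information-level log file so
    # CompactReason lines like the one above get recorded for later review.
    duplicati-cli backup "file:///mnt/nas/backup" /home/user/data \
        --no-auto-compact=true \
        --log-file=/var/log/duplicati-backup.log \
        --log-file-log-level=Information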

You can also look in the old logs to see how often it compacts. It might not happen all that often.

Sometimes when I really want to catch files, I just run some kind of copy loop of dup-* to another folder.
Because your files are pretty big, you could also just poll a directory listing and guess from time and size.
Although it would be slow and might not add a lot, you could even sha256sum the files for a future comparison against the Base64 hashes Duplicati records.
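
A rough sketch of that kind of loop is below; paths, glob, and polling interval are placeholders, and the hex digest from sha256sum can be converted to Base64 later if you want to match what Duplicati records:

    #!/bin/bash
    # Poll the destination and keep a copy of each new Duplicati file, plus a
    # sha256 record for later comparison. Adjust the glob to the actual prefix.
    # Note: a file still being written may be copied incomplete.
    SRC=/mnt/nas/backup
    DST=/var/tmp/dup-copies
    mkdir -p "$DST"
    while true; do
        for f in "$SRC"/duplicati-*; do
            [ -e "$f" ] || continue
            base=$(basename "$f")
            if [ ! -e "$DST/$base" ]; then
                cp -p "$f" "$DST/" &&
                sha256sum "$DST/$base" >> "$DST/hashes.sha256"
            fi
        done
        sleep 60
    done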

If you want to keep copies but have limited space, you might be able to age off the oldest ones, and try to make Duplicati fail faster with list-verify-uploads, which doesn’t seem much used, but might do a timely verify…

--list-verify-uploads = false
Verify uploads by listing contents.
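
Aging off the oldest copies could be as simple as keeping the newest N in the scratch folder (folder and count are placeholders):

    # Delete everything except the 50 newest copies.
    cd /var/tmp/dup-copies &&
        ls -1t duplicati-* | tail -n +51 | xargs -r rm --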

Another way to limit redundant copying is to copy only files whose source timestamp is newer, e.g. using

   -u, --update
         copy only when the SOURCE file is newer than the destination file or when the destination file is missing
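
With cp that might look like this (paths are placeholders):

    cp -u -p /mnt/nas/backup/duplicati-* /var/tmp/dup-copies/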

You can log system calls, but it will be a lot of lines. You can see an example here from a CIFS bug chase.

There’s no Duplicati-level logging of the file copy. It looks like a single call to the File.Copy Method, done here: