Backend quota is close to being exceeded

Keep in mind that even changed files may generate only small amounts of changed data to back up:

Features

Incremental backups
Duplicati performs a full backup initially. Afterwards, Duplicati updates the initial backup by adding the changed data only. That means, if only tiny parts of a huge file have changed, only those tiny parts are added to the backup. This saves time and space and the backup size usually grows slowly.

Feature request: List all backup sets with their size shows how to find added, modified, and deleted files.

If you’re willing to dig details out of a general-purpose log, you can set `--log-file` and `--log-file-log-level=verbose` (and if you’d rather pre-filter, you might be able to use `--log-file-log-filter` to catch only messages like the one below):
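As a rough sketch, those options might be added to a backup run like this. The destination URL, paths, and the filter expression are placeholders and guesses, not from the post; check the Duplicati manual for the exact filter syntax:

```shell
# Hypothetical example: add verbose file logging to a backup run.
# <storage-URL> and <source-path> are placeholders for your own values.
# The --log-file-log-filter value is an assumption; consult the manual
# for the exact matching rules before relying on it.
duplicati-cli backup <storage-URL> <source-path> \
  --log-file=/path/to/duplicati.log \
  --log-file-log-level=verbose \
  --log-file-log-filter="*CheckFileForChanges*"
```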

2019-09-10 17:45:15 -04 - [Verbose-Duplicati.Library.Main.Operation.Backup.FilePreFilterProcess.FileEntry-CheckFileForChanges]: Checking file for changes C:\PortableApps\Notepad++Portable\App\Notepad++\backup\webpages.txt@2019-09-06_154119, new: False, timestamp changed: True, size changed: True, metadatachanged: True, 9/8/2019 12:12:06 PM vs 9/6/2019 7:58:56 PM

where any test that turns up True means some data about that file may be uploaded.
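If you already have such a log, the change-check lines can be filtered with ordinary shell tools. A minimal sketch, using a shortened copy of the sample line above stored in a variable (in a real run you would grep the file you passed to `--log-file` instead):

```shell
# Sample verbose line (shortened from the log excerpt above).
log='2019-09-10 17:45:15 -04 - [Verbose-CheckFileForChanges]: Checking file for changes C:\PortableApps\webpages.txt, new: False, timestamp changed: True, size changed: True, metadata changed: True'

# Keep only change-check lines that report at least one True flag,
# i.e. files that may contribute change data to the upload.
printf '%s\n' "$log" | grep 'CheckFileForChanges' | grep -q 'True' \
  && echo "potential change data to upload"
# prints: potential change data to upload
```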

Unchecking affects only new backups. You can either wait for older versions to age out or purge the data yourself. Space freed by deletions is reclaimed by compact, which runs automatically by default but can be forced if you wish. It depends on how much of a hurry you’re in; living with an almost-out-of-space condition can get awkward.
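Forcing a compact from the command line might look roughly like this; the destination URL is a placeholder, and this is a sketch rather than a verified invocation:

```shell
# Hypothetical example: force a compact of the backend data.
# <storage-URL> is a placeholder for your backup destination;
# you may also need credentials and other options from your job.
duplicati-cli compact <storage-URL>
```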