Moving to another storage provider is the basic recipe, but it might be easier to move something else instead. Duplicati needs some free room in order to reclaim space, because it reclaims it with an operation called "compact", described here for the commandline but probably more typically run automatically, per Compacting files at the backend.
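If you ever wanted to run one by hand, a manual compact from the commandline looks roughly like the sketch below. This is not your exact command: the storage URL, passphrase, and database path are placeholders for whatever your job uses (the GUI's Export → As Command-line shows the real ones), and the last two options are just the defaults spelled out.

```
# Sketch of a manual compact run (duplicati-cli on Linux,
# Duplicati.CommandLine.exe on Windows). All values below are placeholders.
duplicati-cli compact "ftp://nas.example.com/backup?auth-username=user&auth-password=***" \
  --dbpath="/root/.config/Duplicati/XXXXXXXXXX.sqlite" \
  --passphrase="your-backup-passphrase" \
  --dblock-size=50MB \
  --threshold=25
```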
There’s a log of what one of mine looked like at the bottom of this post (giving some log data for a problem). You can see how it downloads data blocks (50MB default size, though I don’t know whether you configured yours larger), then uploads the compacted result, and then (and only then) deletes the original files whose contents it repacked.
Commonly a compact runs right after a delete does (assuming your job options have a retention setting that allows deletions), but the deletion itself just records in the database that some space is no longer in use; the space analysis done by compact then decides whether it’s time to repack (which needs space in order to free up space…).
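For reference, the retention and compact behavior comes from job options along these lines; this is only a sketch, and the actual values are whatever your job is configured with:

```
# Sketch of the relevant job options (placeholders, not your job's values).
--keep-versions=10        # or --keep-time=3M, or a --retention-policy rule; allows deletions
--threshold=25            # percent of wasted space in the backend before compact repacks
--no-auto-compact=false   # leave automatic compacting after delete enabled
```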
Block-based storage engine is a short piece on Duplicati’s design, if that helps in understanding my description.
How to limit backups to a certain total size? might be useful to avoid future overfills, if your Duplicati is recent enough to have that option.
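As I understand it, the idea there is a destination quota set as an advanced option, so the job warns as the space runs low instead of hitting a full disk. A sketch, with a made-up size:

```
# Sketch of quota-related advanced options (size is a placeholder).
--quota-size=500GB             # tell Duplicati how much destination space it may use
--quota-warning-threshold=10   # warn when remaining quota falls below this percentage
```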
Very much agreed, and even Duplicati has to follow the process I described. I hope things are in reasonably good shape after the storage filled, but the best path is probably to free up space and then see whether things appear OK… Duplicati is usually fairly resilient to being interrupted mid-backup. On the next backup it checks what the destination storage actually contains. This sometimes looks like a backup-from-scratch, but it’s not.
Using the Command line tools from within the Graphical User Interface is another way to force a delete when space is too tight to rely on the usual delete after the backup. After that, I guess you’d do a manual compact. But before either step, give Duplicati room to upload compacted files, maybe by temporarily moving other NAS files.
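As a sketch of that (which versions to drop is your call, and the version numbers below are made up), the GUI Commandline screen pre-fills the job’s storage URL and options, so you mainly pick the command and add arguments, then run it again with the command switched to compact:

```
# Sketch of using the GUI Commandline tool to force a delete, then a compact.
# Command: delete
# Commandline arguments (added to the pre-filled ones):
--version=5,6,7        # drop specific old versions, or use e.g. --keep-versions=5
# Then run again with Command: compact (usually no extra arguments needed)
```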