Is this an initial backup, an old backup that suddenly had trouble, or what?
I’ll assume that’s the storage type you chose. Backblaze also supports S3 these days.
If you mean the Test connection, it’s a very basic check, usually just a login and a file listing.
RemoteListAnalysis would also look at the file list and react to what it sees.
Backblaze seems to be refusing the file deletion. It can be set up to block deletes, but the default for an application key would not do that. Did you configure the key in a special way?
If it’s been a while and you forgot, you could set up another key to see if it can delete.
You can also test put and delete using BackendTool with a filename chosen so it doesn’t look like a Duplicati file (Duplicati files typically begin with duplicati-). For a full test, the BackendTester can test an empty folder. Start with Export As Command-line to get a URL.
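As a rough sketch of that round trip (the storage URL and the test filename below are placeholders; paste the actual URL from Export As Command-line, and check the tool’s built-in help for the exact syntax on your install):

```shell
# Upload a throwaway file, list to confirm it arrived, then delete it.
# "testfile.txt" is a hypothetical name chosen so it does not start
# with "duplicati-"; the URL is whatever Export As Command-line gave you.
echo "hello" > testfile.txt
Duplicati.CommandLine.BackendTool.exe PUT "<storage-url>" testfile.txt
Duplicati.CommandLine.BackendTool.exe LIST "<storage-url>"
Duplicati.CommandLine.BackendTool.exe DELETE "<storage-url>" testfile.txt
```

If the DELETE step fails with the same permission error as the backup, that points at the key or bucket settings rather than at Duplicati.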
For a better view of what leads up to the failed Delete, watch About → Show log → Live → Information. There’s presumably a reason for a delete, but why B2 refused is the mystery.
Specifically this part says that Duplicati does not have permissions to delete files on B2.
If a transfer fails, Duplicati will make a new filename (never overwrite) and repeat the upload.
In some cases the failure will leave a (partial) file on the remote storage that needs to be deleted.
Could it be the case here that Duplicati wants to delete that specific file, but you do not grant it permission to do so?
Thanks @kenkendk. As part of my protection against malware somehow deleting my backups, I configured Backblaze to never allow deletes by anyone for 1 year. So, yes, Backblaze is likely denying the delete.
But I configured Duplicati to never try to delete anything
Not really. You configured it to not delete backup versions. Read what was said before:
If an individual file needs to be deleted, it is marked in the database and (I think) deleted exactly as per original post, as part of cleanup before the backup itself is allowed to start.
Have we heard the file name yet? You can probably get it from a live log or a disk log-file. Best case would probably be that it’s a dblock file, because dlist and dindex files are more tightly tied into the backup’s structure.
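If you’d rather have a persistent record than the live log, a disk log can be set with two advanced options on the job or the command line (the path below is a hypothetical example):

```shell
# Capture Information-level messages to a file, including the name
# of the file in the failing Delete.
--log-file=/path/to/duplicati.log
--log-file-log-level=Information
```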
EDIT 1:
If you like, you can get a tool like DB Browser for SQLite to look at your job Database (safest to use a copy). If the Remotevolume table has a row with Deleting as its State, that’s probably the file Duplicati is trying to delete. You can make Duplicati stop attempting the cleanup, but depending on what type of file it is, and how bad it is, that may be a future problem.
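The lookup itself is a one-line query. Here’s a sketch run against a tiny mock database, since I’m going from memory on the Remotevolume columns (Name, Type, State are assumptions; check the real table in DB Browser):

```python
import sqlite3

# Build an in-memory stand-in for the job database.
# Column names are assumptions based on the real Remotevolume table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Remotevolume (Name TEXT, Type TEXT, State TEXT)")
db.executemany(
    "INSERT INTO Remotevolume VALUES (?, ?, ?)",
    [
        ("duplicati-20240101T000000Z.dlist.zip.aes", "Files", "Verified"),
        ("duplicati-b1234.dblock.zip.aes", "Blocks", "Deleting"),
    ],
)

# The same query, run on a copy of the real database, shows what
# Duplicati is trying (and failing) to delete.
rows = db.execute(
    "SELECT Name, Type FROM Remotevolume WHERE State = 'Deleting'"
).fetchall()
for name, vtype in rows:
    print(name, vtype)
```

Running that same SELECT on a copy of the real database (and nothing else — no writes) is safe and tells you whether it’s a dblock, dlist, or dindex that’s stuck.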
Although individual file retries (and deletion of old tries) can happen, another delete source is the compact operation, which can coalesce and then delete files that are too small (the threshold is configurable):
--small-file-size=<int>
Files smaller than this size are considered to be small and will be compacted with other small files as soon as there are <small-file-max-count> of them. --small-file-size=20 means 20% of <dblock-size>.
This can be avoided by setting the no-auto-compact option (which doesn’t stop efforts to clean up bad files). It’s sort of possible to guess which case you hit, especially if you look at the database to see what it wants to delete. A single file may suggest a single upload error. Timing is another clue: compact would run after the backup, but if the deletion fails, the retry happens before the next backup, so you’d have to go back in the logs to find the first delete refusal.
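To put numbers on that threshold: with the default 50 MB dblock-size, --small-file-size=20 (read as 20%) makes anything under 10 MB a compact candidate. A quick check, assuming those defaults:

```python
# Default dblock (remote volume) size is 50 MB; --small-file-size=20
# is read as a percentage of that.
dblock_size_mb = 50
small_file_percent = 20

threshold_mb = dblock_size_mb * small_file_percent / 100
print(threshold_mb)  # files under this size count as "small"
```

So if the file B2 refuses to delete is well under 10 MB, compact is a plausible suspect; a near-full-size dblock points more toward an upload-retry cleanup.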