Duplicati - Glacier Deep Archive errors at finish

Hi,
Is it possible to fix the error “detected non-empty block sets with no associated blocks” without deleting and recreating the database?
I have tried backing up the data to AWS several times (2-3 new tasks). The backup takes several days, and I get the error above. If I repair by deleting and recreating the database, I will have to pay AWS for restoring the files.
Or maybe I should back up with another tool; maybe Duplicati is not suitable for this?

Info:
Version name: “2.0.5.1_beta_2020-01-18” (2.0.5.1)
SQLite: 3.22.0 - Mono.Data.Sqlite.SqliteConnection
OS: Unix 5.4.51.7
Docker for OMV
Options set in tasks:
backup-test-samples = 0
no-auto-compact
Also I tried:
remote volume size 500MB, 1TB
no-backend-verification

Welcome to the forum @Seboos

Do small backups work? If nothing else, it will be more feasible to study the issue if it’s easily reproduced.
This is not a common failure. I don’t use S3, but it’s definitely used (not sure about Glacier Deep Archive).

I’m unclear on the context. The post mentions new tasks, but elsewhere it sounds like an old backup. Was there a working backup that broke (if so, looking at that failure might help), or is this still at the test level?

I’m not personally convinced that Duplicati plus any sort of cold storage is a good idea, especially when a number of hot cloud storage options are available at a similar price, but it does depend on your exact needs. Duplicati supports the basic S3 API but doesn’t support the API call that restores objects from Glacier back into “live” S3.
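For reference, that restore is a separate S3 request that Duplicati never issues. A minimal boto3 sketch (the bucket and key names are made up, and this is just an illustration of the missing step, not anything Duplicati does):

```python
import boto3

s3 = boto3.client("s3")

# Ask AWS to bring one archived object back into "live" S3 for a while.
s3.restore_object(
    Bucket="my-backup-bucket",                          # placeholder name
    Key="duplicati-20200101T000000Z.dlist.zip.aes",     # placeholder key
    RestoreRequest={
        "Days": 7,                                      # keep the restored copy this long
        "GlacierJobParameters": {"Tier": "Bulk"},       # cheapest tier; roughly up to 48 hours
    },
)
# Until the restore job finishes, GET requests on the object still fail,
# which is why a tool that expects to download files on demand struggles here.
```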

What I’m saying is that for best results, you might be able to find something optimized for this case. I don’t know of any specific recommendations, but there’s a forum search URL below, and there’s the Internet.

You’ve found the options the forum usually advises for preventing Duplicati from doing a lot of downloads, but a https://forum.duplicati.com/search?q=glacier search could find other comments on workability.

Note that you’re using more storage by not compacting, and you’re reducing reliability by turning off both the sample-file download tests and even the listing of the remote directory to check that everything is there and right-sized.

There are also different ways to get things into Glacier (I think; as I said, I don’t use it), and I’m not sure they’re equally successful. I think you can write directly into the Glacier storage class, or you can transition objects in with a lifecycle rule.
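As a rough illustration of the two routes (bucket name, key, and rule ID below are placeholders, and again this is plain boto3, not a Duplicati feature):

```python
import boto3

s3 = boto3.client("s3")

# Route 1: write the object directly into the Deep Archive storage class.
s3.put_object(
    Bucket="my-backup-bucket",
    Key="duplicati-b0123.dblock.zip.aes",
    Body=b"...",
    StorageClass="DEEP_ARCHIVE",
)

# Route 2: upload as STANDARD and let a lifecycle rule transition objects later.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-deep-archive",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},                # empty prefix = whole bucket
                "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)
```

Either way, once an object has moved to Deep Archive, Duplicati can still list and upload, but anything that needs to download that object will fail until a restore is requested outside Duplicati.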

I’m not certain any of this relates to “detected non-empty block sets with no associated blocks”, though. That’s an internal consistency check failing in the local database, which is why I ask whether a small backup also fails.

Certain database issues are hard to fix, and I think this is one of them, so finding the cause would be helpful.
Finding the cause is helped greatly by simple test steps that fail reliably for developers in a generic environment.
This sort of issue usually needs at least very heavy SQL logging, at levels that get into privacy concerns.

Creating a bug report and posting it (or a link to it) is intended to be privacy-safe while still allowing some analysis. The damage should be visible, and it can possibly be hand-fixed with commands once the offending blockset (a.k.a. file) is found. You might need to translate from the database record to the original file name.
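If you want to poke at a copy of the database yourself, something along these lines might locate the blockset and map it back to a path. The table names (“Blockset”, “BlocksetEntry”) and the “File” view are assumptions about the local database layout and may differ between versions, so treat this as a sketch only, and only run it against a copy:

```python
import sqlite3

con = sqlite3.connect("copy-of-backup-database.sqlite")  # work on a copy, not the live file

# Blocksets that claim a non-zero length but have no rows in BlocksetEntry,
# which is roughly what the error message describes.
rows = con.execute("""
    SELECT bs.ID, bs.Length
    FROM Blockset bs
    WHERE bs.Length > 0
      AND NOT EXISTS (SELECT 1 FROM BlocksetEntry be WHERE be.BlocksetID = bs.ID)
""").fetchall()

for blockset_id, length in rows:
    # Translate the blockset back to the original file path(s), if any.
    paths = con.execute(
        "SELECT Path FROM File WHERE BlocksetID = ?", (blockset_id,)
    ).fetchall()
    print(blockset_id, length, [p[0] for p in paths])
```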

The bug report is a sanitized database snapshot, so doesn’t have all the history of how it got to that state.