Backup that fails!

Hi,
I often have alert messages at the bottom of the screen:

Detected non-empty blocksets with no associated blocks!
One or more errors occurred.

The next time the backup runs, the warning message is gone without my doing anything. The number of backup versions is incremented, so I do not worry too much.
It is a little awkward, because I will not be testing my backups regularly.
Could there be warnings only for “chess”, with suggestions that are as detailed as possible? In the logs, I do not know how to list the blocks involved. I would like to understand: the file names, and the backup date/time.

I only run a regeneration of the database (delete & repair) when I see the word “repair” in an alert (and a priori it works).

What does “chess” mean? If you are asking whether that message could be just a warning, it is an error because it is serious.

The issue is also probably fixed in a September Canary build, which has finally emerged as the new 2.0.5.1 Beta.

If you mean improving warnings, errors, and suggestions, that is already a recognized weakness, but it is extremely broad and has been too much to tackle so far with the available volunteers. It would be really nice…

Hello,
Sorry, I do not speak English well and I sometimes use the wrong terms, but I think I understand how it operates and how things are presented.
Sometimes the backup completes correctly, but not always. It is especially in the most “serious” cases that I find it hard to work out, from the messages at the bottom of the screen, what I should do.
I am using the web interface at http://localhost:8300/ngax/index.html#/ with Duplicati version 2.0.5.1_beta_2020-01-18.

I posted this thread in Support because I realized that several backups have not been made.
I “delete” and “repair” the database, then restart the backup manually. Each time, I get the same red error:

Detected non-empty blocksets with no associated blocks!
One or more errors occurred.

What should I do?
Regards

It sounds like there is some damage in the actual backup files. This is what 2.0.5.1 avoids much better, but it is not any better at reading backups that are already damaged. Is starting over too unattractive?

You already know how to delete the database. Deleting the destination as well will let you back up your current files, but old file versions will be lost. If you want those, recovering them will likely require very detailed technical work.
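
For reference, the start-over route looks roughly like this from a Linux command line. This is only a sketch with placeholder values; the real destination URL, database path, and passphrase can be copied from the job's Export > As Command-line screen, and the same steps can also be done entirely from the web interface.

# 1. Set the old local database aside (its path is shown on the job's Database screen).
mv ~/.config/Duplicati/XXXXXXXXXX.sqlite ~/.config/Duplicati/XXXXXXXXXX.sqlite.old

# 2. Delete (or move aside) the old files at the destination, e.g. through the hubiC web interface.

# 3. Run the backup again; it starts fresh with only the current files.
#    (use mono Duplicati.CommandLine.exe if the duplicati-cli wrapper is not installed)
duplicati-cli backup "<destination-url>" "$HOME/Documents" --dbpath="$HOME/.config/Duplicati/XXXXXXXXXX.sqlite" --passphrase="<passphrase>"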

If you’re unsure, you could copy your old backup off, just in case you need something. There is a tool:

Duplicati.CommandLine.RecoveryTool.exe

This tool can be used in very specific situations, where you have to restore data from a corrupted backup.

However, it is probably not something you want to be running all the time, because it is command-line only.
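
If you do try it, the usual sequence is download, index, then restore. The commands below are only a sketch with placeholder paths and a placeholder destination URL; check the tool's built-in help for the exact options in your version. On Linux it is started through mono:

# Download and decrypt the remote backup files into a local folder.
mono Duplicati.CommandLine.RecoveryTool.exe download "<destination-url>" /tmp/recovery --passphrase="<passphrase>"

# Build an index over the downloaded blocks.
mono Duplicati.CommandLine.RecoveryTool.exe index /tmp/recovery

# Restore whatever can be recovered into a separate folder.
mono Duplicati.CommandLine.RecoveryTool.exe restore /tmp/recovery --targetpath=/tmp/restored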

My guess at what you’re seeing is this now-fixed issue:

CheckingErrorsForIssue1400 and FoundIssue1400Error test case, analysis, and proposal #3868

Backup fails with CheckingErrorsForIssue1400 and FoundIssue1400Error (as a pair), or maybe "Detected non-empty blocksets with no associated blocks!" appears, but the other two may also be in the log if one looks. There have been numerous forum and some GitHub reports on both issues, with little progress beyond the theory that a file changing during backup causes it. The test there used a small writer.bat script (shown in the issue).

Do you think you might have some files that are growing at the time of the backup? If so, that could cause this. Windows can use snapshots (VSS), but on Linux this is often not an option, because the required setup (such as LVM) was not done at installation.

--snapshot-policy
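
If snapshots are possible on your system (on Linux that generally means the source is on LVM and Duplicati runs as root), the option can be added under the job's Advanced options or on a command line. A sketch, with a placeholder destination:

# Try to use a snapshot, and abort the backup if one cannot be created:
duplicati-cli backup "<destination-url>" "$HOME/Documents" --snapshot-policy=required

# Or use --snapshot-policy=auto to fall back quietly to a normal backup when a snapshot is not available.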

Hi,
Thank you for your help. Nevertheless, it is still just as difficult for me to understand (both the technology and the language).

A backup that does not succeed has happened several times.
The remote volume size is generally 250MB.
My work is personal and modifies few documents. I also have few additions or deletions.
Maybe it is a problem with Duplicati, with hubiC, or with my Duplicati configuration?
Maybe it is a problem with my settings?
Maybe I need to change my strategy?
Since I have another backup on a disk at home, I can delete the remote backup and start from scratch, but I hope it will not happen again, because I cannot spend a lot of time on it.
Regards

Be careful of files that change automatically, such as the browser cache, commonly in ~/.cache.
Duplicati’s own databases are in ~/.config/Duplicati and are modified while a backup is running.
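
If such paths are inside your backup source, they can be excluded with filters, either on the job's Source data screen or as options. A rough sketch with a placeholder destination:

# Exclude the browser cache and Duplicati's own databases from the backup.
duplicati-cli backup "<destination-url>" "$HOME" --exclude="$HOME/.cache/" --exclude="$HOME/.config/Duplicati/"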

Your job log can reveal the true extent of the changes. Below is an example with few modifications:

(screenshot of an example job log)

The only way to tell is to try. The original post said you “often have alert messages”, so maybe we will know soon.
2.0.5.1 has lots of fixes in it, but there’s not enough information here yet to point to one besides:

Fixed sporadic issue with backups of files being written, thanks @BlueBlock