How to ignore the "Found x remote files that are not recorded in local storage, please run repair" error

I have Duplicati running as a backup solution on a handful of Linux boxes, with Storj as the remote server. Each Linux box has its own Storj S3 access key and its own bucket. The access keys are set to read and write only, with no modify and no delete permissions. I like this setup because it protects against ransomware.

They have all been running fine for over a week. Each Linux box has about 300 GB to back up, no system files, just user files, about 40,000 files on average, with about 1 GB of incremental changes per backup run. Backups run 5 times in a span of 24 hours.

Over the past few days I’ve noticed that a couple (or more) random Linux boxes keep failing with “Found x remote files that are not recorded in local storage, please run repair”; it is normally about 22 remote files.

I run the repair and I can see it’s trying to delete the unknown Duplicati files. I am assuming it created them and somehow failed in a previous run, though no such errors exist in the previous run’s log. I have to either manually delete those files or assign a new S3 key that has delete access, something I am trying to avoid.

Interestingly, after running repair, even though it wasn’t able to delete those unknown files, it continues to run backups as normal for the next 2 or 3 cycles/days, then it tries to delete the same files again and fails the backup until repair is run.

I just want to know if there is a way for Duplicati to simply ignore the files it wants to delete and continue with the backup. I was thinking that every year or so I’d let Duplicati run a cleanup, but right now it wants me to do this cleanup every week. Not really a good set-and-forget backup solution.

Any advice would be appreciated!

Welcome to the forum @ultramoo

After interrupted backup, next backup failed with remote files that are not recorded in local storage #4485

is an issue that doesn’t seem to show up much but might be more likely when systems restart. Do they?

Presumably by schedule? If so, then look for missing or odd-looking backup logs preceding the issues.
Basically, don’t expect to see the same problem in the prior backup; instead, look at the prior backup to see whether it might have caused a problem in the next one, perhaps by being cut short by a system restart or other failure.

What sort of names? The issue I pointed to involves dblock files, though I don’t know for sure that it’s only those.

Duplicati <job> → Show log → Remote lets you look up the history of those names (barring database transaction rollbacks due to system restarts or other failures), but that might take a lot of scrolling.

Easier is to look directly in a copy of the database with sqlitebrowser (search the RemoteOperation table). A similar (but more reliable and readable) view comes from setting log-file=<path> with log-file-log-level=Information and reading the resulting log file.
To just see what files it’s complaining about, warning level will do, even with About → Show log → Live.
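If you’re comfortable with a little scripting, a query against a copy of the job database gives the same history without the scrolling. Below is only a minimal sketch: the RemoteOperation table is the one mentioned above, but the column names (Timestamp, Operation, Path), the database path, and the timestamp format are my assumptions, so verify them in sqlitebrowser first.

```python
import sqlite3
from datetime import datetime, timezone

# Path to a COPY of the job's local database -- hypothetical location;
# the real path is shown on the job's Database page in the Duplicati UI.
DB_COPY = "/tmp/duplicati-job-copy.sqlite"

# Hypothetical placeholder -- replace with a real remote file name
# from the warning message you want to trace.
REMOTE_NAME = "duplicati-XXXX.dblock.zip.aes"

con = sqlite3.connect(DB_COPY)
# Column names are assumptions -- check the RemoteOperation table schema first.
rows = con.execute(
    "SELECT Timestamp, Operation, Path FROM RemoteOperation "
    "WHERE Path = ? ORDER BY Timestamp",
    (REMOTE_NAME,),
)
for ts, op, path in rows:
    # Timestamps are assumed to be Unix seconds (UTC).
    when = datetime.fromtimestamp(ts, tz=timezone.utc)
    print(f"{when:%Y-%m-%d %H:%M}  {op:<8}  {path}")
con.close()
```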

Probably no reasonable way if you value backup integrity. Checking backup files is a feature.

no-backend-verification

If this flag is set, the local database is not compared to the remote filelist on startup. The intended usage for this option is to work correctly in cases where the filelisting is broken or unavailable.

Might get at least part of what you ask, but turning off safeties is a risky way to dodge an issue.

Scheduling is run by the Duplicati app (not cron), the Linux servers have not rebooted for months so it can’t be that, and the Duplicati service does not crash or restart as far as I can tell. There are no errors prior to the failures.

I’ve just had yet another server pop up with the same error, so let’s use that one as an example.
The error occurred at 3 PM; the last backup it ran was at 10 AM with no errors.

[screenshot of the job’s log list]

It doesn’t seem to show the error for the latest run at 3 PM here; I don’t know why.

But the error is stated in a pop up:
[screenshot of the error popup]

Here is the Remote Tab:

So I do get email logs of the error, and I’ve checked one of the files it reported an error on:
duplicati-bbbb6f1bdb97e4acf9e03b5dc4adcc0bc.dblock.zip.aes

It turns out this file is from the 5 AM backup, so two backups ago.

Looks like it’s trying to delete some dindex and dblock files. I don’t know why it wants to delete them; I don’t want it to do any cleanup, so I must have some setting somewhere that is deleting old files. Since the S3 access key does not have delete access, the delete fails, and Duplicati seems to pick it up again two backup sessions later.

I just need to figure out why it’s trying to delete these files.

As far as I can tell, this file was created over a week ago.

It must be deleting it as it doesn’t think it needs it anymore.

These are my settings for the backup job:

I need to figure out why it wants to delete this backup file from a week ago and how to stop it. Any ideas?

Assuming that no file upload is interrupted, the only place where files get deleted is during auto-compact. This happens either if enough data has been deleted from older versions (which does not apply to your case), or if many small files can be combined into one. You can see that the deleted file had a size of 12 MB instead of 150 MB. This happens when the changed data does not fit evenly into your volume size, so at the end of a run a smaller volume is uploaded.

To disable this, there is the no-auto-compact option. As long as the backend can handle many small files (at most one per backup run), that should be fine. You can still run compact manually at a later point if the small files start to become a problem.
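As a rough illustration of that last point (the numbers below are approximations from this thread, roughly 1 GB of changed data per run and 150 MB volumes, not figures from any actual log):

```python
# Rough sketch of why a small "tail" dblock gets uploaded at the end of a run.
# Numbers are approximations from this thread, not from any actual log.
changed_mb = 1024   # ~1 GB of new/changed data per backup run
volume_mb = 150     # remote volume (dblock) size

full_volumes, tail_mb = divmod(changed_mb, volume_mb)
print(f"{full_volumes} full {volume_mb} MB volumes, plus one ~{tail_mb} MB tail volume")

# With 5 runs per day these tail volumes pile up, and auto-compact eventually
# tries to merge them into one full-sized volume and delete the originals --
# that delete is exactly what a read/write-only S3 key blocks.
```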


There is some guidance in the forum on how to set things up when trying to get an immutable backup.
Turning off compact is one step. You already keep all versions, but eventually this may take a lot of storage.
People using cold storage such as Glacier have similar issues. The database will also eventually get large.
The current default blocksize of 100 KB is good for about 100 GB of backup, so consider raising it somewhat (which requires a fresh backup), or maybe find ways to periodically clean up, rotate destinations, etc.
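As a very rough sketch of the block-count arithmetic behind that rule of thumb (the 300 GB figure comes from earlier in the thread; the alternative blocksizes are only illustrative values, not recommendations):

```python
# Back-of-envelope block counts behind the "100 KB blocksize for ~100 GB" rule.
source_gb = 300  # approximate data per box, from earlier in this thread

for blocksize_kb in (100, 300, 500):  # illustrative values only
    blocks = source_gb * 1024 * 1024 // blocksize_kb
    print(f"blocksize {blocksize_kb:>3} KB -> ~{blocks / 1e6:.1f} million blocks to track")

# Fewer blocks keeps the local database smaller and faster to query, but
# changing blocksize only takes effect on a brand-new backup.
```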

About → Show log → Live → Warning and pressing the Compact button might be interesting to watch.
If it’s trying a delete and getting denied, it should probably retry about 5 times and then give an error…

There’s one other place where a delete might be tried, and you don’t have much control over behavior.
If an upload has a fatal error, a delete of that file name is attempted (which may fail), and the upload is retried under a new name.

Thanks. After setting no-auto-compact to true, it seemed that the backups started to work again, but then one failed about 3 attempts later. It looks like it still wants those files gone first, regardless of whether no-auto-compact is on or off. I suppose that makes sense, as it has already marked them for deletion.

So I’ve deleted those files and it seems to have worked; I’m waiting to see what happens over the next few days. I hope this resolves the issue, otherwise I think I’ll have to reset/delete the backup and start fresh with no-auto-compact = true.