All seem to be issues with the database.
2018-12-10 23:04:49 -08 - [Warning-Duplicati.Library.Main.Operation.Backup.FileBlockProcessor.FileEntry-PathProcessingFailed]: Failed to process path: /unraid/user/appdata/duplicati/EGWNPSGROB.sqlite-journal
2018-12-10 23:04:49 -08 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-CheckingErrorsForIssue1400]: Checking errors, related to #1400. Unexpected result count: 0, expected 1, hash: Ai7lVk0SAIi+egkI+RNjoifQxjFi9XalPoP/gAIBxtI=, size: 512000, blocksetid: 521803, ix: 8, fullhash: ywxm+FrcVlJsyC1yYRAWfPmm0Z/ql+AtmS9jJGtfxvM=, fullsize: 4650152,
2018-12-10 23:04:49 -08 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-FoundIssue1400Error]: Found block with ID 8709596 and hash Ai7lVk0SAIi+egkI+RNjoifQxjFi9XalPoP/gAIBxtI= and size 363232
I get these during backups. It also throws warnings while I'm browsing backups, but the logs don't show what those errors are. Backups still complete and I can browse just fine, but I'm still concerned about these.
So far I've run a database repair, which has had no effect.
The #1400 refers to this issue:
It seemed to happen when reading files that were actively being changed.
Since the file shown is the transaction journal for the backup, I think this is the case as well.
So it’s not really a critical issue? What would you recommend to resolve it?
Are you using a snapshot in your conf?
If you don't, you should, especially to back up log files.
I suspect the suggestion from @bfontaine (thanks!) will avoid some or all of the noise, but if the intent was to make a backup of the local database in a state that matches the destination files (mismatch is bad), it can’t be done in the same backup that’s running. Some people do add the extra safeguard of a database backup after the main backup, but that’s just a precaution against a lengthy database Recreate if it becomes lost somehow.
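One way to take that post-backup database copy is a small script hooked in via Duplicati's --run-script-after option. This is only a sketch with stand-in paths (in a real setup DB_DIR would be e.g. /unraid/user/appdata/duplicati, and DEST must sit outside the backup source set so the copy isn't itself being read mid-change):

```shell
# Sketch: copy the Duplicati job database(s) aside after a backup run.
# Paths are demo stand-ins; adjust DB_DIR/DEST for a real setup.
DB_DIR="${DB_DIR:-/tmp/duplicati-demo/appdata}"
DEST="${DEST:-/tmp/duplicati-demo/db-copies}"

mkdir -p "$DB_DIR" "$DEST"
touch "$DB_DIR/EXAMPLE.sqlite"   # stand-in for the real job database

for db in "$DB_DIR"/*.sqlite; do
  # date-stamp each copy so a bad database doesn't overwrite a good one
  cp -p "$db" "$DEST/$(basename "$db").$(date +%Y%m%d)"
done
```

Running it after (not during) the backup avoids the journal-file race described above.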
Use --snapshot-policy (needs an account in the Administrators group), or don't back it up, or back it up separately.
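For reference, those two options might look like this on a command-line backup job. The destination URL and source paths are illustrative only; note that --snapshot-policy relies on VSS (Windows) or LVM (Linux) support on the host, which may explain why it fails inside a docker container:

```shell
duplicati-cli backup "file:///unraid/backups/target" /unraid/user \
    --snapshot-policy=auto \
    --exclude="/unraid/user/appdata/duplicati/"
```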
I'd note that the #1400 errors don't seem to be well understood at this time, but there are a worrisome number of them.
Snapshotting does not work in my setup. I suspect it’s because I’m running Duplicati as a docker container in my unRAID environment.
I didn’t seem to have this issue before the last beta update, if that gives any more info about the cause. No changes were made in my config.
So it sounds like I could exclude /unraid/user/appdata/duplicati from the backup set to get rid of these errors. I have my config saved elsewhere, so I can recover that in the event of a complete failure. However, what would it mean for me if I needed to do a full recovery without that database in appdata being backed up?
Oh, and since I haven’t mentioned it before, I’m running Duplicati in an unRAID environment as a docker container.
Restoring files if your Duplicati installation is lost is where you might start, to get important files restored fast.
Doing Recreate in Database management is where you would go to get the entire database back so that the existing backup destination (with all its versions) can be fully utilized. Recreate can be slow, or can hit errors. People who use Duplicati mainly for disaster recovery might just abandon old versions after restoring recent.
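For completeness, the Recreate can also be driven from the command line: Duplicati's repair command rebuilds the local database from the destination files when the database file is missing. A hedged sketch, where the destination URL, dbpath, and passphrase are placeholders for your own values:

```shell
duplicati-cli repair "file:///unraid/backups/target" \
    --dbpath="/unraid/user/appdata/duplicati/EGWNPSGROB.sqlite" \
    --passphrase="<backup passphrase>"
```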
Does it make sense to backup Duplicati config and db files? is a currently running discussion on practices…