I just started with Duplicati and it looks very nice. I’m getting this error, and I’ve looked up issue #1400, but the cause is a bit cryptic. I’m using the Docker image, and have set it up to back up Ubuntu data in a volume (bind mount) to my NAS over FTP. The specific errors I got are listed below, but is there some guidance on what they mean? In the warning below, “/source” is where I store all the mapped volumes for my Docker containers, including Duplicati, so it makes sense that it’s having trouble backing up its own database. But what do the errors actually mean? Are they a result of Duplicati trying to back itself up too?

Warnings: [
    2019-08-21 18:34:12 +10 - [Warning-Duplicati.Library.Main.Operation.Backup.FileBlockProcessor.FileEntry-PathProcessingFailed]: Failed to process path: /source/duplicati/data/Duplicati/73688870807967659085.sqlite-journal
Errors: [
    2019-08-21 18:34:12 +10 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-CheckingErrorsForIssue1400]: Checking errors, related to #1400. Unexpected result count: 0, expected 1, hash: mu/BJh6mwPKcSq45jD7O1MaQtJt5p7qOo0Umtte8180=, size: 102400, blocksetid: 60533, ix: 2, fullhash: PDEDx9rze7wipdFK20Ckff59nXRtMSAROeu7rxIJUSQ=, fullsize: 423632,
    2019-08-21 18:34:12 +10 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-FoundIssue1400Error]: Found block with ID 106527 and hash mu/BJh6mwPKcSq45jD7O1MaQtJt5p7qOo0Umtte8180= and size 77448

Welcome to the forum @michaelblight

You’re confirming what I said in Verification errors - what do these mean?, and maybe saved me a search. Hopefully that post pushed thinking about the cause a bit further, and if you’re willing to make the bug report per the directions there, it might really help in understanding the cause (otherwise it relies more on speculation).

If you’re technically inclined, I can say more, plus you might be able to take a look in the DB as I mentioned.

This is one of the common ways to hit this issue (did your search find it?), usually without a container, but that shouldn’t matter – it’s been thought to happen more on files that are changing during backups.

I don’t know Docker path mapping well, but if you are having Duplicati back up its own database journal, that’s an invitation to trouble, per file-changes-during-backup. Your job’s Database page will show whether 73688870807967659085.sqlite is the database for that job. The SQLite documentation on temporary files explains what the -journal suffix means.

Please review the other link, and I’d be happy to point to code if you code. The theory is that a short read happens, but the file then gets appended to, so more blocks can still be fetched after it. A short block in the middle breaks things when a search is made for a –blocksize sized block with the right hash, but only a shorter one can be found. Look at your two errors: it wanted size: 102400, found the expected hash, but at size 77448, hence the error.
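To make that concrete, here’s a rough sketch in Python of how fixed-size blocking produces the (hash, size) pairs the errors refer to. This is not Duplicati’s actual code – the function name is made up, and 102400 bytes is just the default –blocksize from your error:

```python
import hashlib

BLOCKSIZE = 100 * 1024  # default --blocksize, 102400 bytes, as in the error

def split_into_blocks(data: bytes):
    """Split file content into fixed-size blocks, hashing each one.

    The backup database stores (hash, size) pairs; every block except
    possibly the last should be exactly BLOCKSIZE bytes.
    """
    blocks = []
    for i in range(0, len(data), BLOCKSIZE):
        chunk = data[i:i + BLOCKSIZE]
        blocks.append((hashlib.sha256(chunk).digest(), len(chunk)))
    return blocks

# The failure mode: a short read got only 77448 bytes where a full
# 102400-byte block was expected, then the file grew and more blocks
# followed. The later lookup asks for (hash, 102400) but only
# (hash, 77448) exists -> "Unexpected result count: 0, expected 1".
```

The key point is that both the hash and the size must match for the lookup to succeed, which is exactly what your two error lines show disagreeing.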

The theory seemed to test out, but after theory comes waiting for a fix and a release, so avoidance such as described in the “Verification errors” link will have to do for now. Snapshots on Ubuntu are a little unlikely because I think the default setup doesn’t set up LVM (do you know how you set yours up?), and if so, then Screen 3 “Source Data” changes such as unchecking sources or adding Filters can keep Duplicati from attempting to back up its own rapidly changing database journal. While it’s sometimes useful to back up a Duplicati DB while it’s inactive (e.g. using a different job) to ease disaster recovery, an active DB is of little help.
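For example, an exclude filter along these lines would keep the journal out of the backup (syntax from memory, and the path is just the one from your warning – check the Filters documentation before relying on it):

```text
--exclude=*.sqlite-journal
--exclude=/source/duplicati/
```

The same exclusions can be entered on Screen 3 in the GUI’s Filters section rather than as command-line options.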

As a Docker side note (and I’m not an expert at all), are you keeping Duplicati’s database in its container? Generally I thought one wants persistent data outside the container, so the container can be replaced, e.g. whenever the time comes to upgrade to a newer container. The other option might be to let Duplicati just upgrade itself inside its old container, using the same mechanism it would use if it were not in a container, however IIRC there are Docker users on the forum who favor the container-keeps-no-persistent-data approach.

If you’re willing to look in your DB after this error, you might see this sort of result (from my test last night):

  • 2019-08-21 23:02:10 -04 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-CheckingErrorsForIssue1400]: Checking errors, related to #1400. Unexpected result count: 0, expected 1, hash: iZFxgvrWfC+8CJIwcfhI+DEeLAkD4HHvwtZF/TJ9baw=, size: 276, blocksetid: 8, ix: 22, fullhash: K8w3a6PSGLR2DsXmDpkaaD7rkQveEJyE1+cxbcS4rIk=, fullsize: 22804
  • 2019-08-21 23:02:10 -04 - [Error-Duplicati.Library.Main.Database.LocalBackupDatabase-FoundIssue1400Error]: Found block with ID 12 and hash iZFxgvrWfC+8CJIwcfhI+DEeLAkD4HHvwtZF/TJ9baw= and size 272


This had 1024 byte blocks, so 22804 bytes normally means 22 * 1024 = 22528 bytes plus 276 at the end.
It expected a 276-byte block at the end of the file, but the blocks actually arrived as 272 bytes and 4 bytes. Surprised at the absence of the expected block, Duplicati complained, saying a lot about the numbers above.
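If you want to sanity-check the arithmetic yourself, here’s the calculation in Python, using only the numbers from the error (nothing Duplicati-specific):

```python
# Numbers from the error above: a 22804-byte file with 1024-byte blocks.
blocksize = 1024
fullsize = 22804

# 22 full blocks, with a 276-byte block expected at the end.
full_blocks, tail = divmod(fullsize, blocksize)
print(full_blocks, tail)  # prints: 22 276

# The file grew mid-backup, so the tail actually arrived as 272 + 4 bytes,
# and the expected 276-byte block was never there to be found.
assert tail == 272 + 4
```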

Blockset 8 never got its 23rd (partially filled) block, because its row insert inserted 0 rows (instead of 1), because it couldn’t find the block it expected (the topic of the error messages), because the block got split up…

Test script writer.bat was:

@echo off
rem Append to writee.txt forever, so the file keeps changing during backup.
:top
echo * >> writee.txt
goto top

You shouldn’t really need to know all this, but you did ask what the errors meant, so I explained a lot. :wink: Another reason this is posted in some detail is so I can point to it from other posts, or a Duplicati Issue.

Thanks - I’m no longer trying to backup the duplicati container, and everything seems to be working fine. I store the duplicati data in a host folder for persistence and map it to the container as a volume.

I am using LVM on the main disk in my system, but never set it up on the disk for the Docker container volumes. I will try setting up a second Duplicati container to back up the first - am I correct in assuming that if I lose the Duplicati database, I won’t be able to restore any files it has backed up?

I like to run everything in a Docker container - it’s great for isolation; being able to start/stop/upgrade containers independently; administering and accessing via a web interface using Portainer, etc. And in this case it makes it easy to spin up multiple independent instances of Duplicati.

Not exactly, but it’s harder. The fastest way to get something back is a direct restore, where you say what you want and a partial temporary database is built to do it. For continued backups or multiple restores you can use Recreate to rebuild the full database from the backup files. That can be quite slow if you have a large backup…

Exporting a backup job configuration is also a good idea, as the backup is self-describing only for restore purposes. There are no details of passwords, source files, or other backup configuration settings in the backup folder.

FYI CheckingErrorsForIssue1400 and FoundIssue1400Error test case, analysis, and proposal #3868 is the rather technical continuation of what I started earlier here. Maybe someone can actually get this issue fixed.