Unable to continue my backups due to File Inconsistency - Please help

Hi,

For a few weeks now I’ve been completely unable to use Duplicati. I use the Jottacloud backend and the latest Beta version. Everything used to work fine and I have uploaded more than 1 TB across several different backup jobs. But these last weeks, I’ve been unable to complete even a single backup. Every time I start one, the backup fails with this error:

GetRequestStream timed out

Sometimes I can try again, but mostly, the next time I run it I get this error:

Found inconsistency in the following files while validating database: ***********, actual size 633695364, dbsize 0, blocksetid: 19524 . Run repair to fix it.

(The *********** stands for the actual file name.)

There is nothing I can do to fix this error. Repairing the database doesn’t work and the purge-broken-files command finds nothing to fix. I can’t do anything with the program anymore, because I can’t risk getting the remaining backups stuck like that. Has anyone experienced something like this?
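For reference, the command-line equivalent of what I tried looks roughly like this (the storage URL and database path are placeholders for my real ones):

    Duplicati.CommandLine.exe repair "<storage-url>" --dbpath="<path-to-job-database>.sqlite"
    Duplicati.CommandLine.exe purge-broken-files "<storage-url>" --dbpath="<path-to-job-database>.sqlite"

Neither of them reports anything it can fix.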

I provided more detail in this bug report.

You mention that you are backing up to Jottacloud. Can you rule out that this is a problem with Jottacloud?

I can access the files normally through the web interface. I guess I would need some other user to confirm if Duplicati is working with Jotta.

Anyway, the big problem is the other error (Found Inconsistency…). After I get a single error like that I cannot even try again, and there is nothing I can do to fix my backups.

Even better would be to test if your backup job works with a different storage provider…

TBH, I have no idea whether this is a Duplicati or a Jottacloud issue; I just suspect it might be Jottacloud because it has previously been found to be very slow (just search for Jottacloud on this forum), which is not surprising given its unlimited plan.

However, even if something is wrong with Jottacloud, I do agree that Duplicati could do a better job of handling that situation:

The problem I see, regardless of the backend provider, is that my backups are ruined as a consequence of the timeout error.

Once I get the Found Inconsistency bug, that’s it. The backup is blocked forever; I can’t do anything, can’t even try uploading again. Repairing doesn’t work, purge-broken-files doesn’t work, and months of uploading are locked in a state that I can no longer update.

Even if there is a problem with Jotta, I should get only a connection error, and be able to resume my backups afterwards, not be told that there is a problem with a file with no way to repair it.

Sorry if I sound like ranting, but I simply don’t know what to do.

I’m not an expert on Duplicati, but from all I understand, that is very unlikely. I understand the frustration when repair doesn’t help (I’ve been there myself earlier this year). But if the db can’t be repaired, it can always be rebuilt. I suggest you move the corrupted db somewhere else and run the backup job again. Since Duplicati can’t find a db but finds files that have been backed up, it will rebuild the database (right @JonMikelV?). Beware, however, that this can take some time.

Sorry, let me specify that:

I guess it’s better if you copy the db somewhere else (you probably won’t need it, but just to be safe) and then hit Recreate (delete and repair).

Essentially yes, but it will happen more like this (assuming the .sqlite file is gone):

  1. It will start with “Verifying backend data…” (at which point it will create an “empty” .sqlite file)
  2. It will then complain about “Found ### remote files that are not recorded in local storage, please run repair”
  3. You’ll then have to use the job menu “Advanced → Database…” and select “Repair”, at which point it will process all the remote dlist (and maybe dindex) files to rebuild the local database (the command-line equivalent is sketched below)
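If you prefer the command line, step 3 can also be done there. A rough sketch, with the storage URL and database path as placeholders (check your version’s help text for the exact syntax):

    Duplicati.CommandLine.exe repair "<storage-url>" --dbpath="<path-to-the-new>.sqlite"

It will then download and process the remote dlist/dindex files and fill in the local database.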

However, the initial error of GetRequestStream timed out makes me feel that you’ll still have issues because your provider is timing out on file requests. This can happen if you have a very large backup (or a small dblock size), resulting in so many remote files that the provider takes long enough to list them that things time out.

If that’s really what’s going on then you’ll likely find that the timeout errors come almost exactly 10 minutes after the previous step runs, as that is the default HTTP request timeout. You could try upgrading to (or running a portable copy of) the latest Canary, which gets you access to the --http-operation-timeout parameter that lets you tell web requests to wait longer than the default (I think 10 minutes) before timing out:
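For example, if you were to run the job from the command line on a Canary build, it would look roughly like this (the storage URL and source folder are placeholders, and I believe the timespan format accepts values like 30m, but check the built-in help for your version):

    Duplicati.CommandLine.exe backup "<storage-url>" "<source-folder>" --http-operation-timeout=30m

The same parameter should also be settable as an advanced option in the job’s settings in the GUI.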

If you’re totally fed up and just want to get to your backup you could download all your remote destination files to a local drive and try a repair or restore from there. Once the potential issues with remote requests and internet speeds have been removed I expect you’ll find things work just fine and you won’t have lost anything in your backup.
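If you go that route, the rough idea (the local folder and database path are placeholders here) is to point the repair (or a restore) at the downloaded copy using the file backend, so nothing has to go over the internet while the database is being rebuilt:

    Duplicati.CommandLine.exe repair "file://<local-folder-with-downloaded-files>" --dbpath="<path-to-job-database>.sqlite"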

@kenkendk, is there a way to do a local database rebuild from manually downloaded dlist and dindex files only, or are dblock files needed as well?


Thanks, guys, I will try your suggestions. I really appreciate your assistance.

About the backup size, though: the problem happened with a 512 GB backup and a 21 GB one, so I don’t think that is the issue. The block size is small, though; I left it at the default, 50 MB, which I assumed would be the recommendation. Is it possible to change it after the fact, or must the whole backup stay at the same block size?

Edit: an additional question. I upgraded to Canary to set the new option. Should I also set the retry-delay to something else?

Wee!

The backups are running again! Recreating the database fixed the inconsistency error and the http-operation-timeout option made the uploading work again. I was so afraid deleting the database would force me to start from scratch… I’m now recreating the database for the larger backup.

Thanks a lot!

Now, if I can pester you a little bit more, I still have these questions:

  1. What should I set http-operation-timeout to? I set it to 30min.
  2. Should I set ‘retry-delay’ to anything?
  3. Can I change from the 50mb block size after the fact?

Thanks again!

Glad to hear it’s working again for you!

  1. I’d set http-operation-timeout as low as you can go without having issues, though it won’t matter if it’s longer than necessary; the timeout only comes into play when the destination takes longer than that to reply. The reason it isn’t defaulted to something really long is that Duplicati would then take that long to come back and report an error, which would frustrate some users.

  2. I believe --retry-delay defaults to 30s; it’s just the amount of time Duplicati waits before trying a failed step again (and the step is retried --retry-count times, which defaults to 5). This is useful for things like destinations that have a long delay between a file being uploaded and it becoming visible to Duplicati.

  3. That depends on what you mean: 50MB is the default dblock (archive file) size, which you can change after the fact; however, the default *block* (hash) size is 100KB and that CAN’T be changed. Note that if you change the dblock size, it will start using the new size going forward (there’s a command-line sketch after this list). Older files will stay at their smaller size until they are re-processed as part of the normal old-version cleanup process (assuming you haven’t set your backup to store versions forever).
     
     If that REALLY bugs you, you could probably fiddle with the --small-file-size (make it smaller than 50MB) and --small-file-max-count (set it to 1) parameters to trigger a cleanup of the older / smaller files. But if you do that, be sure to put the settings back to their defaults. :slight_smile:
     
    Note that a larger dblock size means more time / bandwidth / temp space will be spent during the post-backup verification step, which is why the default is set at 50MB. If doing local / LAN backups a larger dblock size is likely to give better performance.
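If it helps to see those together, here’s a rough command-line sketch (the storage URL, source folder, and values are placeholders, so double-check the option names and formats against your version’s built-in help):

    Duplicati.CommandLine.exe backup "<storage-url>" "<source-folder>" --retry-count=5 --retry-delay=30s --dblock-size=100MB

The same options can also be added as advanced options in the job’s settings in the GUI; either way, only newly created dblock files will use the new size.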

Thanks, JonMikelV, I think I understand now. Since everything is working, I will leave the settings as they are.

And thanks to tophee as well!

BTW: A good way of showing your appreciation for a post is to like it: just press the :heart: button under the post.

This also helps the forum software distinguish interesting from less interesting posts when compiling summary emails or showing search results.

I agree, this is how it should be (and how I expect it to work).

If you get the error again, can you try to make a “bugreport” (an obfuscated copy of your local database) before doing the “delete-n-repair”?

Maybe I can figure out why it is ending up in this state. If not, I think I can figure it out anyway. It has to do with a file being scheduled for upload, the upload failing, and the remote file being purged from the database, but somehow the file list is already created and it mentions these missing bits.