For the past few weeks I’ve been completely unable to use Duplicati. I use the Jottacloud backend and the latest Beta version. Everything used to work fine and I have uploaded more than 1 TB across several different backup jobs. But these last weeks I’ve been unable to complete even a single backup. Every time I start one, the backup fails with this error:
GetRequestStream timed out
Sometimes I can try again, but mostly, the next time I run it I get this error:
Found inconsistency in the following files while validating database: ***********, actual size 633695364, dbsize 0, blocksetid: 19524 . Run repair to fix it.
(The *********** stands for the actual file name.)
There is nothing I can do to fix this error. Repairing the database doesn’t work, and the purge-broken-files command finds nothing to fix. I can’t do anything with the program anymore, because I can’t risk getting the remaining backups stuck like that. Has anyone experienced something like this?
Even better would be to test if your backup job works with a different storage provider…
TBH, I have no idea whether this is a Duplicati or a Jottacloud issue; I just suspect it might be Jottacloud because it has previously been found to be very slow (just search for Jottacloud on this forum), which is not surprising, given its unlimited plan.
However, even if something is wrong with Jottacloud, I do agree that Duplicati could do a better job of handling that situation:
The problem I see, regardless of the backend provider, is that my backups are ruined as a consequence of the timeout error.
Once I get the Found Inconsistency error, that’s it. The backup is blocked forever; I can’t do anything, can’t even try uploading again. Repairing doesn’t work, purge-broken-files doesn’t work, and months of uploading are locked in a state that I can no longer update.
Even if there is a problem with Jotta, I should get only a connection error, and be able to resume my backups afterwards, not be told that there is a problem with a file with no way to repair it.
Sorry if I sound like ranting, but I simply don’t know what to do.
I’m not an expert on Duplicati, but from all I understand, that is very unlikely. I understand the frustration when repair doesn’t help (I’ve been there myself earlier this year). But if the db can’t be repaired, it can always be rebuilt. I suggest you move the corrupted db somewhere else and run the backup job again. Since Duplicati can’t find a db but finds files that have been backed up, it will rebuild the database (right @JonMikelV?). Beware, however, that this can take some time:
Essentially yes, but it will happen more like this (assuming the .sqlite file is gone):
It will start with “Verifying backend data…” (at which point it will create an “empty” .sqlite file)
It will then complain about “Found ### remote files that are not recorded in local storage, please run repair”
You’ll then have to use the job menu “Advanced -> Database…” and select “Repair” at which point it will process all the remote dlist (and maybe dindex) files to rebuild the local database
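For reference, the same delete-and-recreate flow can be sketched from the command line. This is only a sketch: the binary name (`duplicati-cli`), the storage URL, and the database path below are placeholders for your own job’s values, so check `help repair` on your install for the exact syntax:

```shell
# Move the corrupt job database out of the way instead of deleting it,
# so it is still available for a bug report later. (Path is a placeholder;
# the real one is shown in the job's Database... screen.)
mv ~/.config/Duplicati/JOBDB.sqlite ~/duplicati-db-backup/

# Rebuild the local database from the remote dlist (and dindex) files.
# The jottacloud:// URL and --dbpath are placeholders for your job's values.
duplicati-cli repair "jottacloud://backup-folder?authid=..." \
    --dbpath=/home/user/.config/Duplicati/JOBDB.sqlite
```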
However, the initial error of GetRequestStream timed out makes me feel that you’ll still have issues because your provider is timing out on file requests. This can happen if you have a very large backup (or small dblock size) resulting in a lot of remote files that the provider takes so long to list that things time out.
If that’s really what’s going on then you’ll likely find that the timeout errors come almost exactly 10 minutes after the previous step runs, as that is the default HTTP request timeout. You could try upgrading to (or running a portable copy of) the latest Canary, which gets you access to the --http-operation-timeout parameter that lets you tell web requests to wait longer than the default (I think 10m) before timing out:
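As a hypothetical example (the binary name, storage URL, and source path are placeholders for your own job), a Canary install would let you pass the longer timeout like this:

```shell
# Sketch: raise the per-request timeout from the ~10 minute default to 30 minutes.
duplicati-cli backup "jottacloud://backup-folder?authid=..." /home/user/data \
    --http-operation-timeout=30m
```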
If you’re totally fed up and just want to get to your backup you could download all your remote destination files to a local drive and try a repair or restore from there. Once the potential issues with remote requests and internet speeds have been removed I expect you’ll find things work just fine and you won’t have lost anything in your backup.
@kenkendk, is there a way to do a local database rebuild from manually downloaded dlist and dindex files only, or are dblock files needed as well?
Thanks, guys, I will try your suggestions. I really appreciate your assistance.
About the backup size, though, the problem happened with a 512 GB backup and a 21 GB one, so I don’t think that is the issue. The block size is small, though; I left it at the default, 50 MB, which I assumed was the recommendation. Is it possible to change it after the fact, or must the whole backup stay at the same block size?
Edit: an additional question. I upgraded to Canary to set the new option. Should I also set the retry-delay to something else?
The backups are running again! Recreating the database fixed the inconsistency error, and the http-operation-timeout option made the uploading work again. I was so afraid deleting the database would force me to start from scratch… I’m now recreating the database for the larger backup.
Thanks a lot!
Now, if I can pester you a little bit more, I still have these questions:
What should I set http-operation-timeout to? I set it to 30 minutes.
Should I set ‘retry-delay’ to anything?
Can I change from the 50mb block size after the fact?
I’d set the http-operation-timeout setting to as low as you can go without having issues, though it won’t matter if it’s longer than necessary. The timeout only matters when the destination takes longer than that to reply. The reason it’s not defaulted to really long is that it would take that long for Duplicati to come back and report an error which would frustrate some users.
I believe --retry-delay defaults to 30s; it’s just the amount of time Duplicati waits before trying a failed step again - and the step is retried --retry-count times (defaults to 5). This is useful for things like destinations that have a long delay between a file being uploaded and it becoming visible to Duplicati.
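So a more patient configuration for a slow destination might look like this (the values are just examples, and the binary name, URL, and path are placeholders):

```shell
# Sketch: wait 60s between retries and retry each failed operation up to
# 10 times, instead of the defaults of 30s and 5 retries.
duplicati-cli backup "jottacloud://backup-folder?authid=..." /home/user/data \
    --retry-delay=60s --retry-count=10
```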
That depends on what you mean: 50MB is the default dblock (archive file) size, which you can change after the fact; however, the default *block* (hash) size is 100KB and that CAN’T be changed. Note that if you change the dblock size it will start using the new dblock size going forward. Older files will stay their smaller size until they are re-processed as part of the normal old-version cleanup process (assuming you haven’t set your backup to store versions forever).
If that REALLY bugs you, you could probably fiddle with the --small-file-size (make it smaller than 50MB) and --small-file-max-count (set it to 1) parameters to trigger a cleanup of the older / smaller files. But if you do that, be sure to put the settings back to their defaults.
Note that a larger dblock size means more time / bandwidth / temp space will be spent during the post-backup verification step, which is why the default is set at 50MB. If doing local / LAN backups a larger dblock size is likely to give better performance.
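Putting the two previous points together, here is a sketch of changing the volume size (the binary name, URL, and path are placeholders, and the --small-file-size value is only an illustrative guess, so double-check it against your version’s help output):

```shell
# Sketch: use 100MB volumes going forward; existing 50MB volumes stay as
# they are until the normal cleanup re-processes them.
duplicati-cli backup "jottacloud://backup-folder?authid=..." /home/user/data \
    --dblock-size=100MB

# Optional, temporary: force older/smaller volumes to be repackaged now.
# Put these two settings back to their defaults after one run!
duplicati-cli backup "jottacloud://backup-folder?authid=..." /home/user/data \
    --dblock-size=100MB --small-file-size=20MB --small-file-max-count=1
```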
I agree, this is how it should be (and how I expect it to work).
If you get the error again, can you try to make a “bugreport” (an obfuscated copy of your local database) before doing the “delete-n-repair” ?
Maybe I can figure out why it is ending up in this state. If not, I think I can figure it out anyway. It has to do with a file being scheduled for upload, upload fails and the remote file is purged from the database, but somehow the file-list is already created and it mentions these missing bits.