I have a quite strange problem with one of my backups. I backed up a 47 GB folder to a WebDAV drive (IONOS HiDrive). This took quite a long time. However, at the end of the upload, an error was reported and the db was therefore not uploaded to the web drive.
As the repair did not work, I tried to delete and recreate the db. However, this did not work, as no db was found on the server. As a workaround I found in the forum, I created a dummy backup project with a single file and uploaded this to my backup. Afterwards, a recreate of the db seemed to work. It took around three hours and seemed to look through the backup files (925 files of 50 MB each).
That can produce what you saw. The dlist can reference dblocks that aren’t there, so everything gets searched. How the backup process works explains some terminology and concepts, if you like, but it’s not essential.
Were you trying to fix a “No filelist found on the remote destination”? Missing one dlist can sometimes be resolved simply by running another backup which can pick up where the last one got cut off, then finish it.
There’s even supposed to be a synthetic filelist uploaded at the start of a backup to record progress made before a past interruption, and that’s working in Canary, but the latest Beta has a bug where it tries to do that but fails…
If you never uploaded an initial dlist, that’s a different problem because having none is a hard starting spot.
How many times has the backup run before? If it ran before, then you likely have some old dlist files there.
Looking at the destination will also work, and you might even find an old dlist there. Look for a file from around that time of roughly 88 KB; the last file uploaded is often the dlist file.
That message means “The operation Backup has completed”, so it doesn’t fit, but maybe it was left over from an earlier time and the live log hadn’t restarted.
You can try looking at the server and job logs to figure out what operation was really going on at what time.
Is this the error you described as “some error”? That’s not enough to work from. Were there any errors in the logs I mentioned?
Posting the URL helps me understand what you did, and you already posted one below where I said “Got a URL”. You even did it the fancy way, instead of using the link button that I posted as one way to share the URL.
You can see on the Home page how many backup versions Duplicati made for a given backup. Example:
Are we trying to solve a first-backup situation, or a failure with ongoing backups? If ongoing, how valuable are the older versions? There are a variety of approaches, ranging from just starting again, to deleting the newest backup in the hope that earlier ones are OK, to Recreate tricks to save a full data upload, and so on. Finding out what caused the error would be nice, but that can’t happen without some additional info, e.g. from logs…
Yes. It’s checking its work in a routine called VerifyConsistency, which unfortunately found an issue that probably should not have happened. It’s an internal inconsistency. What is the modification time on the file?
There is possibly some timing hole that this fell into, and if it was (thankfully) one file, let’s examine it well.
The Repair then fell into the unfortunate gap that Repair is only willing to run when there’s a backup there. This gap can be avoided by doing the initial backup in increments, so if it fails, less time is wasted.
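The incremental-first-backup idea can be sketched as a dry run; the commands below are echoed rather than executed, and `duplicati-cli`, the WebDAV URL, and the source paths are placeholders of my own, not details from this thread:

```shell
# Dry-run sketch: split a large first backup into per-subfolder runs, so a
# failed upload wastes at most one subfolder's worth of time. Commands are
# echoed for review instead of executed; all names here are placeholders.
BACKUP_URL="webdavs://example.hidrive.example/backup"
for dir in photos documents video; do
  echo duplicati-cli backup "$BACKUP_URL" "/data/$dir"
done
```

Remove the `echo` to actually run each step; once the whole set has completed once, later runs can go back to backing up everything at once.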
We’ve gone over one reason why Recreate might have to search for everything: you moved in a dlist with a single file that wasn’t from this backup. However, if it had been a single file that should have been in this backup, then something else was missing. Well, actually, hearing there was no dlist until then, it’s likely that one file…
There’s a variation on the idea that uses a dlist for a zero-file backup, which might persuade Repair to work. You also know the one (?) offending file, so possibly a purge of the file would work, but sometimes errors prevent repair methods from working, so I’ll hold off on details while you look for any clues from the .jpg file.
I’m still not hearing much about your goals, except that the original backup “took quite a long time”. What are your upload and download speeds, if you know them? Was the three-hour download maybe faster than the upload would be?
The recovery approach can be somewhat tailored around your preferences, after some experiments here…
Do you recall what sort they were? The only thing I can tell from the history trail is perhaps the dummy dlist. Did anything else go up? Are there any old local databases lying around, e.g. from manual copies of it?
If the only thing left is a bunch of dindex and dblock files, then the only way to get the database back without uploading all the data is a variation of what you tried earlier, except using a zero-file dlist for better safety.
Having zero files should make it impossible for Duplicati Recreate to want to read all the big dblock files. There should be a download of the tiny zero-file dlist, and whatever relatively small dindex files are there.
After Recreate is done, run the backup, and it should use the blocks from dindex files to avoid uploading.
When done, a new dlist file would be uploaded, pointing primarily to source blocks already at destination.
Does that sound reasonable? If so, I’ll run through the procedure I just created, to double-check it works.
What I deleted was only the scraps that were uploaded the next day, as the old backups had not been uploaded. No mix-up is possible, as the date of the files was clearly distinguishable from the old ones.
I followed your advice and uploaded a zero dlist file. However, it now starts to read through the dblock files as well. I’ll let you know as soon as I have a result (it will probably take some hours…, currently 27 of 925 files of 50 MB each…).
PS: Upload speed is 10 Mbit/s while download is 50 Mbit/s.
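Those speeds roughly fit the timings mentioned above; the back-of-the-envelope arithmetic (shell integer math, using the 47 GB source size and the 925 × 50 MB dblock count from this thread) looks like this:

```shell
# Rough timing estimate: uploading 47 GB at 10 Mbit/s vs. downloading the
# 925 x 50 MB dblock files at 50 Mbit/s during a Recreate.
UP_S=$(( 47 * 1000 * 8 / 10 ))    # 376000 Mbit / 10 Mbit/s = 37600 s (~10.4 h)
DOWN_S=$(( 925 * 50 * 8 / 50 ))   # 370000 Mbit / 50 Mbit/s =  7400 s (~2.1 h)
echo "upload ~$(( UP_S / 3600 )) h, download ~$(( DOWN_S / 3600 )) h"
```

So a Recreate that reads every dblock (about two hours of pure transfer, three with processing) is still far cheaper than re-uploading everything.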
I didn’t give the recipe. How did you go about making a dlist file for zero files? It’s somewhat complicated.
You can confirm that you succeeded in backing up zero files by Restore showing a date but no contents.
It is present and in the backup, and that’s the issue.
An empty folder is still part of the backup (as you see on the Restore screen), and while it doesn’t have data the way a file does, it has metadata such as its timestamp and permissions, to use for the restore. Such information is stored in blocks on the remote, just like file content, so the dlist looks for those blocks but doesn’t find them.
I could have been more accurate in how zero it needed to be, but there are many types of file-like things potentially in a backup, especially on systems like Linux which are more likely to be using symbolic links.
The basic plan I had in mind for the zero-entry backup (using a more generic term this time) was to use the options --allow-missing-source to forgive an intentionally bad single source path, and --upload-unchanged-backups, which makes it do the backup anyway. You get a dlist. There is no dblock or dindex because none was needed.
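As a dry-run sketch (commands echoed, not executed), that zero-entry backup could look like the following; the scratch destination URL and the nonexistent source path are placeholders I made up, not steps from this thread:

```shell
# Dry-run sketch of producing a zero-entry dlist: back up an intentionally
# nonexistent path to a scratch destination. --allow-missing-source forgives
# the bad path; --upload-unchanged-backups forces the dlist upload anyway.
# All names are placeholders; remove "echo" to actually run it.
SCRATCH_URL="file:///tmp/zero-dlist-scratch"
echo duplicati-cli backup "$SCRATCH_URL" /nonexistent-path \
  --allow-missing-source=true --upload-unchanged-backups=true
```

The scratch destination should then hold only a small *.dlist.zip file, which is the one to copy to the backup that needs a dlist.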
Because that dlist requires no blocks, it can safely be moved to the backup that needs a dlist, without causing an exhaustive search for data that was left behind in the dblock files of the folder it came from.
The rest is not what I would have suggested. There’s not much point in nailing down exact steps now, but it would have been something like enabling --no-auto-compact to prevent accidental cleanup deletions (since the empty dlist doesn’t need any dblock files), and then running the backup to reattach files to their blocks already uploaded.
After that, turn off --no-auto-compact, because compacting is normally a good thing to keep space waste under control.
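Put together, the order of operations described above might look like this dry-run sketch (commands echoed, not executed; the URL and source path are placeholders, not from this thread):

```shell
# Dry-run sketch of the recovery order: rebuild the local database from the
# destination, run one backup with compacting disabled so the empty dlist
# cannot trigger deletions, then let later runs compact normally again.
# Remove "echo" to actually run each step.
BACKUP_URL="webdavs://example.hidrive.example/backup"
echo duplicati-cli repair "$BACKUP_URL"                                # recreate db
echo duplicati-cli backup "$BACKUP_URL" /data --no-auto-compact=true   # reattach blocks
echo duplicati-cli backup "$BACKUP_URL" /data                          # normal runs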
Hopefully it will work well, and last a long time. The inconsistency error is not normal, but perhaps an action took place that Duplicati is not yet good at handling. If you did “Stop now”, that should be safer in the next Beta. It’s somewhat possible to track these down, but it takes an enormous log of all the database activity.