After rebuild of DB, Duplicati starts uploading again

Hi all,

I have a quite strange problem with one of my backups. I backed up a 47 GB folder to a WebDAV drive (IONOS HiDrive). This took quite a long time. However, at the end of the upload, an error was reported, and the DB was therefore not uploaded to the web drive.

As the repair did not work, I tried to delete and recreate the DB. However, this did not work, as no DB was found on the server. As a workaround I found in the forum, I created a dummy backup project with a single file and uploaded this to my backup. Afterwards, a recreate of the DB seemed to work. It took around three hours and seemed to look through the backup files (925 files of 50 MB each).

However, in the end, it showed an error again:

When I then tried to back up, it seemed to re-upload everything, as it recognized every file as new/changed:

I stopped the process for now.

Do you have any suggestions?

Best regards,

Welcome to the forum @Hirlinger

The DB is never uploaded. It’s a local cache of information from the backup at destination.

Got a URL? I can guess, but it’s better to know what you did and maybe what the goal was.
Are you trying to reduce uploads? How important are the older source file versions to you?

Did you notice how far the progress bar got on Recreate? Final 10% is a complete search,
looking for data that belonged to a file that’s in the backup. It’s not normal, but may happen.

What’s in screenshot background? That looks like the end of a successful backup. Earlier?

Depending on what you did, you might have deleted prior file information so they all look new.
How about clarifying via URL, link image button, or description what you’ve been doing in this?

Be careful how you stop. Best is the stop button, then “Stop after current file”, not “Stop now”.

Thanks! And thanks a lot for looking into this post!

I meant the file “…”

last post…

Yes, it went quite fast to approx. 90% and stayed there for three hours. In the log, I saw that it went through every one of the 925 50 MB files.

In the background is the end of the repair operation (after three hours).

I just deleted the local database (GUI Advanced → Maintenance → Delete). The dlist file never seems to have been on the server side, due to the error after the whole backup had been uploaded.

Can you elaborate on “clarifying via URL”? Is there a special log technique?

Thanks, I will keep this in mind. Did so this time…

That can get what you got. The dlist can reference dblocks that aren’t there, so everything gets searched.
How the backup process works explains some terminology and concepts, if you like, but it’s not essential.

Were you trying to fix a “No filelist found on the remote destination” error? Missing one dlist can sometimes be resolved simply by running another backup, which can pick up where the last one got cut off and then finish it.

There’s even supposed to be a synthetic filelist uploaded at the start of a backup, to show progress before a past interruption. That’s working in Canary, but the latest Beta has a bug where it tries to do that but fails…

If you never uploaded an initial dlist, that’s a different problem because having none is a hard starting spot.
How many times has the backup run before? If it ran before, then you likely have some old dlist files there.
Looking at the destination will also work, and you might even find a file like this which might be an old dlist:

Look in the background for a file from that time of around 88 KB. The last file uploaded is often the dlist file.

The screenshot says “The operation Backup has completed”, so that doesn’t fit, but maybe it was left over from an earlier run and the live log hadn’t restarted.
You can try looking at the server and job logs to figure out what operation was really going on at what time.

Is this the error described as “some error”? That’s not enough to work from. Any errors in the logs I gave?

I asked for a URL to help understand what you did, but you already posted one below where I said “Got a URL”, and you even did it the fancy way instead of using the link button that I mentioned as one way to post it.

You can see on the Home page how many backup versions Duplicati made for a given backup. Example:


Are we trying to solve a first-backup situation, or a failure with ongoing backups? If ongoing, how valuable are older versions? There are a variety of approaches, ranging from just starting again, to deleting the newest backup in the hope that earlier ones are OK, to Recreate tricks to save a full data upload, and so on. Finding out what happened with the error would be nice, but that can’t happen without some additional info, e.g. from logs…

Hi ts,

Thanks again! To avoid confusion, I’ll start from the beginning (and as I finally found the logs, there are a few more details):

  1. It is a first-time backup (so only one version), 47 GB of picture data. The backup process stopped when nearly completed (possibly while performing a test after the full backup?):

  2. restart did not work, same error

  3. “repair” did not work:

  4. “backup” did not work either:

  5. I think that at this stage (though I’m not sure) I deleted the DB and got the “no filelist” error

  6. after having uploaded a dummy filelist, the repair process could be started. It ended three hours later with this error:

Afterwards, these messages when trying to repair:

Afterwards, it restarted uploading files to the destination.

Unfortunately, in the meantime, I have deleted the new files on the destination (from 7 February onwards), so that only the “old” pure backup remains.

Yes. It’s checking its work in a routine called VerifyConsistency, which unfortunately found an issue that probably should not have happened. It’s an internal inconsistency. What is the modification time on the file?
There is possibly some timing hole that this fell into, and if it was (thankfully) only one file, let’s examine it well.

The Repair then fell into the unfortunate spot that Repair is only willing to run when there’s a backup there. This gap can be avoided by doing the initial backup in increments, so that if it fails, less time is wasted.

We’ve gone over one reason why Recreate might have to look for everything – you moved in a dlist for a single file not from this backup. However, if it was a single file that should have been in this backup, then something else was missing. Well, actually, hearing there was no dlist until then, it’s likely that one file…

There’s a variation on the idea that uses a dlist for a zero-file backup that might persuade Repair to work. You also know the one (?) offending file, so possibly a purge of the file would work, but sometimes errors prevent repair methods from working, so I’ll hold off details while you look for any clues from the .jpg file.

I’m still not hearing much about your goals, except that the original backup “took quite a long time”. What are your upload and download speeds, if you know them? Was the three-hour download maybe faster than an upload would be?
Recovery approach can be somewhat tailored around your preferences, after some experiments here…

Please also read the previous note.

Do you recall what sort they were? The only thing I can tell from the history trail is perhaps the dummy dlist. Did anything else go up? Are there any old local databases lying around, e.g. from manual copies?

If the only thing left is a bunch of dindex and dblock files, then the only way to get database back without uploading all the data is a variation of what you tried earlier, except using a zero-file dlist for better safety.

Having zero files should make it impossible for Duplicati Recreate to want to read all the big dblock files. There should be a download of the tiny zero-file dlist, and whatever relatively small dindex files are there.

After Recreate is done, run the backup, and it should use the blocks from dindex files to avoid uploading.
When done, a new dlist file would be uploaded, pointing primarily to source blocks already at destination.

Does that sound reasonable? If so, I’ll run through the procedure I just created, to double-check it works.
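For reference, a recreate-then-backup sequence like the one described can also be run with Duplicati’s command-line tool. This is only a rough sketch, not steps confirmed in this thread; the destination URL, file paths, and passphrase are all placeholders:

```shell
# Placeholder URL and paths, for illustration only.
# 1. Rebuild the local database from what's at the destination; with a
#    zero-file dlist there, this should read only the small dlist and
#    dindex files, not the big 50 MB dblock files.
duplicati-cli repair "webdavs://user@example-hidrive.com/backup" \
  --dbpath=/path/to/backup.sqlite \
  --passphrase="the backup passphrase"

# 2. Run the backup again; blocks already recorded from the dindex
#    files should not be uploaded a second time, and a new dlist
#    pointing at them is uploaded at the end.
duplicati-cli backup "webdavs://user@example-hidrive.com/backup" /path/to/pictures \
  --dbpath=/path/to/backup.sqlite \
  --passphrase="the backup passphrase"
```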

What I deleted was only the scrap that was uploaded the next day; the old backup files had not been uploaded again. There is no mix-up possible, as the date of the files was clearly distinguishable from the old ones.

I followed your advice and uploaded a zero dlist file. However, now it starts to read through the dblock files as well. I’ll let you know as soon as I have a result (it will probably take some hours…; currently 27 of the 925 50 MB files…).

PS: Upload speed is 10 Mbit/s, while download is 50 Mbit/s

I didn’t give the recipe. How did you go about making a dlist file for zero files? It’s somewhat complicated.
You can confirm that you succeeded in backing up zero files by Restore showing a date but no contents.


What I did was add an empty folder:

However, the folder is still present, right?

Anyway, it did not work either: the same error at the end of the repair, and after a purge-broken-files, the upload started again (and, weirdly, all the old backup files have been removed from the remote).

So, finally, I gave in and started a fresh backup…

Thanks a lot for your very helpful support! This increased my trust in Duplicati, even if it did not work in the end…

It is present and in the backup, and that’s the issue.

An empty folder is still part of the backup (as you see on the Restore screen), and while it doesn’t have data the way a file does, it has metadata such as its timestamp and permissions, to use for the restore. Such information is stored in blocks on the remote, just like data content, so dlist looks, but doesn’t find.

I could have been more accurate in how zero it needed to be, but there are many types of file-like things potentially in a backup, especially on systems like Linux which are more likely to be using symbolic links.

The basic plan I had in mind for the zero-entry (using a more generic term this time) was to use the options allow-missing-source, to forgive an intentionally bad single source path, and upload-unchanged-backups, which makes it do the backup anyway. You get a dlist. There is no dblock or dindex because it needed none.
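A sketch of how such a zero-entry backup might be made from the command line, assuming the scratch destination and paths below are placeholders (the two options are the real ones named above):

```shell
# Placeholder scratch destination and paths, for illustration only.
# A source path that doesn't exist, plus --allow-missing-source, gives
# a backup containing zero entries; --upload-unchanged-backups forces a
# dlist to be written even though nothing was actually backed up.
duplicati-cli backup "file:///tmp/scratch-dest" /nonexistent-source \
  --allow-missing-source=true \
  --upload-unchanged-backups=true \
  --dbpath=/tmp/scratch.sqlite \
  --passphrase="same passphrase as the real backup"
# The dlist file produced in /tmp/scratch-dest can then be copied to
# the real destination. It must be encrypted with the real backup's
# passphrase (or both must use --no-encryption) to be readable there.
```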

Because the dlist has no required blocks, it can safely be moved to the backup that needs a dlist without causing an exhaustive search for data that was left behind in the dblock in the folder it came from, failing:

The rest is not what I would have suggested. There’s not much point in nailing down exact steps now, but it would have been something like enabling no-auto-compact (to prevent accidental cleanup deletions caused by the empty dlist not needing any dblock files) and running the backup to reattach files to their blocks already uploaded.
After that, turn no-auto-compact back off, because normally compacting is a good thing to keep space waste under control.
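The reattach step described above might have looked roughly like this (placeholder URL and paths again; no-auto-compact is the real option discussed here):

```shell
# Placeholder URL/paths. Run the real backup once with compacting off,
# so the near-empty version from the zero-entry dlist doesn't trigger
# deletion of "wasted" dblock files before the backup references them.
duplicati-cli backup "webdavs://user@example-hidrive.com/backup" /path/to/pictures \
  --dbpath=/path/to/backup.sqlite \
  --no-auto-compact=true

# Later backups can drop --no-auto-compact (or set it to false) so
# that normal compacting resumes.
```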

and hopefully it will work well, and last a long time. The inconsistency error is not normal, but perhaps an action took place that Duplicati is not yet good at handling. If you did “Stop now”, it should be safer in the next Beta. It’s somewhat possible to track these down, but it takes an enormous log of all the database activity.