Hangs while "Re-creating missing index file"

Hi,

My backups have been failing for some time now, and today I finally had the time to look into it. It seems to me like it hangs while re-creating a missing index file. Can someone suggest something to test to make it continue?

I’m now on v2.0.4.16; until tonight I was on the current beta, and I believe I had the same issue there.
Destination is Jottacloud.

The log says the following:

    1. april 2019 kl. 00:06: Re-creating missing index file for duplicati-b75c03ab613984f958c14d9cabd372ce5.dblock.zip.aes
    1. april 2019 kl. 00:05: The operation Backup has started

If I look at the log in verbose mode, it loops through all my files and then stops. The log entries look like this:

    1. april 2019 kl. 00:13: Including path as no filters matched: E:\Filenames
    1. april 2019 kl. 00:12: Including path as no filters matched: E:\Filenames

It stays like this for a while, with no indication of anything happening in any of the logs I can find. But the Duplicati.Server process still uses ~30% CPU.
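For reference, this is roughly how I’m capturing the verbose log, in case I’m missing one (a sketch of the advanced options I’ve set on the job; the path is just an example):

    --log-file=C:\temp\duplicati-verbose.log
    --log-file-log-level=Verbose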

Any suggestions for where I can continue my debugging?

Thanks,
Erik

I have seen this take a very, very, very long time. Hold on, or move it to a more powerful machine if possible…

Would that be hours or days for a 2TB backup set? :slight_smile:

It depends on how many index files are missing. Once you know that number and how long one file takes, you can start estimating…

Thanks! Fortunately the backup job managed to complete; it took 3.5 hours for my 2 TB backup. It used to take only 30 minutes or so.

I was hoping it would be quicker the next time, but unfortunately it wasn’t; it is still running after almost 4 hours. Are there any suggestions for how I can make it quicker again? :blush:

Thanks in advance!

The more versions you have, the longer it takes. Try to reduce the number of versions by applying a retention policy.
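For example, as an advanced option on the job (a sketch; the intervals are just the common illustration, so adjust them to your data):

    --retention-policy="1W:1D,4W:1W,12M:1M"

That keeps one version per day for the last week, one per week for the last four weeks, and one per month for the last year; older versions are deleted.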

You might also want to consider splitting it into several jobs, e.g. one that has all the data but runs once a month, and one that has only the data from 2019 (just an example; it depends on your data) and runs every day.
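From the command line, that split could look roughly like this (just a sketch: the Jottacloud URLs, source paths, and passphrase variable are made up, authentication options are omitted, and the monthly/daily scheduling would come from the GUI scheduler or your OS task scheduler):

    REM Monthly job: everything
    Duplicati.CommandLine.exe backup "jottacloud://backup/all" "E:\" --passphrase=%PASSPHRASE%

    REM Daily job: only the frequently changing 2019 data
    Duplicati.CommandLine.exe backup "jottacloud://backup/2019" "E:\2019" --passphrase=%PASSPHRASE%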

Do you happen to know if the GUI says “Verifying backend data” when the detailed log underneath is saying “Re-creating missing index file”? That was my speculation in the post below, where I also mention a 2.0.4.17 fix you might want, because it might leave you with fewer missing index files to deal with. I might have other ideas.

Verifying Backend Data

Thanks for your input!

I thought it was done with the re-creation of missing index files earlier this week, but I was wrong. It turns out the backup only completed while I had the pre-backup verification disabled. Since I turned verification back on, it has been re-creating index files continuously for the last 6 days.
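(For anyone searching later: I believe the setting I toggled corresponds to this advanced option on the job, assuming I’ve matched the GUI checkbox to the right option:)

    --no-backend-verification=true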

The weird thing, and the reason I believed it to be a hung process in the first place, is that it gives absolutely no indication that anything is happening for 3.5 hours. Then it completes an index file and starts on the next. Since I didn’t expect this (it hasn’t happened to me before), I assumed that when nothing happened for a couple of hours, it was broken. Not so! It just takes 3-4 hours per index file.

I have now upgraded to 2.0.4.17 in the hope that it will fix this index issue. :slight_smile:

I do have a retention policy in place, but I see that I might have to split the backup into several jobs. It’s just very convenient to have it all in one place!

Did you ever figure out a way to fix a backup job affected by this? My biggest single backup job (400 GB) has been doing this for days, doing exactly what you described above (doing nothing for 2-3 hours, then uploading a single index file; rinse, repeat). I only just finished setting this backup job up a few weeks ago and would hate to have to re-run it from scratch… I’m willing to do a bit of manual messing-with if needed.

Yes, I ended up letting it run continuously for several days (a week or so on my 2 TB backup set). As mentioned in my previous post, I also upgraded to 2.0.4.17. If I’ve understood correctly, the issue is caused by a backup being stopped/killed in the middle of the process.
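If you would rather run the re-creation as its own step instead of at the start of every backup, I believe the CLI repair command performs the same index-file re-creation (a sketch; the storage URL, database path, and passphrase variable are placeholders for your own):

    Duplicati.CommandLine.exe repair "jottacloud://backup/folder" --dbpath="C:\Users\<you>\AppData\Local\Duplicati\XXXXXXXXXX.sqlite" --passphrase=%PASSPHRASE%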

My recommendation would be to give it several days of processing time; it will eventually finish. Afterwards, my backups took their regular 10-30 minutes again.

I hope this helps! :slight_smile:

Thanks - I decided to just let it run for now.

Has anyone ever figured out why each “piece” of this operation requires about 2.5 hours to complete? It probably wouldn’t drive me as insane otherwise… @JonMikelV? @kenkendk?

For reference:

It’s now been running for around 11 days in total… I’m pretty sure I could’ve just re-done the entire backup (just over 400 GB) in that time :face_with_head_bandage:

Ouch, I’m sorry to hear that, and sorry for the bad advice! :slight_smile: You’re probably right that it would have been quicker to start over!

You might want to try the stuff in this post, though?

Excellent - I did what he said, and Duplicati blasted through the last 23 missing index files in about 10 seconds, as opposed to 2 hours apiece - so this probably saved me 2 days of waiting. Sorry for missing that other thread, but thanks for calling it out for me.
