My backups have been failing for some time now, and today I finally had time to look into it. It seems to hang while re-creating a missing index file. Can someone suggest something to test to make it continue?
I’m now on v.126.96.36.199, but I was on the current beta until tonight. I believe I had the same issue there.
Destination is Jottacloud.
The log says the following:
April 2019 at 00:06: Re-creating missing index file for duplicati-b75c03ab613984f958c14d9cabd372ce5.dblock.zip.aes
April 2019 at 00:05: The operation Backup has started
If I look at the log in verbose mode, it loops through all my files and then stops. The log entries look like this:
April 2019 at 00:13: Including path as no filters matched: E:\Filenames
April 2019 at 00:12: Including path as no filters matched: E:\Filenames
It stays like this for a while, with no indication of anything happening in any of the logs I can find, but the Duplicati.Server process still uses ~30% CPU.
Any suggestions on where I can continue my debugging?
The more versions you have, the longer it takes. Try reducing the number of versions by applying a retention policy.
You might also want to consider splitting it into several jobs. E.g. one that has all your data but runs once a month, and one that has only data from 2019 (just an example; it depends on your data) and runs every day.
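To picture why a retention policy helps: it thins old versions by keeping at most one version per interval inside each time frame, so fewer versions remain for verification and index work. Below is a minimal Python sketch of that thinning idea; it is just an illustration, not Duplicati's actual implementation (in Duplicati you would set the `--retention-policy` advanced option with a string like `1W:1D,4W:1W,12M:1M`).

```python
from datetime import datetime, timedelta

def thin_backups(versions, now, policy):
    """Toy sketch of keep-one-per-interval retention thinning.

    versions: backup timestamps (datetimes), any order.
    policy:   list of (frame, interval) timedelta pairs, newest frame first,
              e.g. (7 days, 1 day) = "within the last week, keep one per day".
    Returns the surviving versions, newest first.
    """
    kept = []
    last_kept = None
    for v in sorted(versions, reverse=True):
        age = now - v
        # Find the first (newest) time frame this version falls into.
        interval = next((step for frame, step in policy if age <= frame), None)
        if interval is None:
            continue  # older than every frame: version is dropped
        # Keep it only if it is far enough from the last kept version.
        if last_kept is None or last_kept - v >= interval:
            kept.append(v)
            last_kept = v
    return kept

# Example: 30 daily backups thinned with "1 week: daily, 30 days: weekly".
now = datetime(2019, 4, 1)
daily = [now - timedelta(days=i) for i in range(30)]
policy = [(timedelta(days=7), timedelta(days=1)),
          (timedelta(days=30), timedelta(days=7))]
survivors = thin_backups(daily, now, policy)
print(len(survivors))  # 30 versions thinned down to 11
```

With fewer versions on the backend there are correspondingly fewer dlist/dindex files to check or rebuild, which is why the posters above suggest it shortens these long runs.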
Do you happen to know if the GUI says “Verifying backend data” while the detailed log underneath says “Re-creating missing index file”? That was my speculation below, where I also mention a 188.8.131.52 fix you might want, because it might leave you with fewer missing index files to deal with. I might have other ideas.
I thought it was done re-creating the missing index files earlier this week, but I was wrong. It turns out the backup only completed when I disabled the pre-backup verification. Since I turned that back on, it has now been re-creating index files continuously for the last 6 days.
The weird thing, and the reason I believed it to be a hung process in the first place, is that it gives absolutely no indication that anything is happening for 3.5 hours. Then it completes an index file and starts on the next. Since I didn’t expect this (it hasn’t happened to me before), I assumed that when nothing happened for a couple of hours, it was broken. Not so! It just takes 3-4 hours per index.
I have now upgraded to 184.108.40.206 in the hope that it will fix this index issue.
I do have a retention policy in place, but I see that I might have to split the backup into several jobs. It’s just very convenient to have it all in one place!
Did you ever figure out a way to fix a backup job affected by this? My biggest single backup job (400 GB) has been doing this for days, exactly as you described above (doing nothing for 2-3 hours, then uploading a single index file, rinse, repeat). I only finished setting this backup job up a few weeks ago and would hate to have to re-run it from scratch… I’m willing to do a bit of manual messing-with if needed.
Yes, I ended up letting it run for several days continuously (a week or so on my 2TB backup set). As mentioned in my previous post, I also upgraded to 220.127.116.11. If I’ve understood correctly, the issue was caused by a backup being stopped or killed in the middle of the process.
My recommendation would be to give it several days of processing time; it will eventually finish. Afterwards, my backup takes the usual 10-30 minutes again.
Excellent - I did what he said, and Duplicati blasted through the last 23 missing indexes in about 10 seconds, as opposed to 2 hours apiece, so this probably saved me 2 days of waiting. Sorry for missing that other thread, but thanks for calling it out for me.