Recreating Database for days now

Hi,

I have three backup tasks scheduled, and two of them stopped working due to database corruption, so a "delete and recreate database" operation was started.

This task has now been running for five days. I do not see any change in the length of the progress bar, but I can see that Duplicati is continuously using about 15% CPU and is doing something on the disk (0.5 MB/s disk access).

The task doesn't seem to finish. Is it really still doing something? Will it ever finish? The logs are locked, so I cannot see any progress.

The source is 560 GB; the backup is 516 GB with 9 versions. The destination is a USB drive attached to a network computer (Gigabit Ethernet; 100 MB/s file transfers are no problem).

The other backup, which did not even start recreation, is slightly smaller (475 GB, 7 versions) and has Amazon cloud storage as its target.

What should I do?

If you go into the GUI to "About", then "System Info", and search the "Server state properties" window for "Overall progress: 0.xxxxx", you can monitor it creeping along.
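Since there is no ETA display, one rough trick is to take two of those "Overall progress" readings a few hours apart and extrapolate linearly. A minimal Python sketch; the readings below are made up for illustration, and the constant-rate assumption is often broken by a long dblock-download phase, so treat the answer as a rough guess:

```python
# Extrapolate time remaining from two manual "Overall progress" readings
# (copied from About -> System Info -> Server state properties).
# Assumes a roughly constant rate of progress, so treat it as a rough guess.
def eta_hours(p1: float, p2: float, hours_between: float) -> float:
    rate = (p2 - p1) / hours_between      # progress units per hour
    if rate <= 0:
        raise ValueError("no measurable progress between readings")
    return (1.0 - p2) / rate              # hours until progress hits 1.0

# Hypothetical readings: 0.70, then 0.71 twelve hours later.
print(f"about {eta_hours(0.70, 0.71, 12.0):.0f} hours left")  # about 348 hours
```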

The last 10% just takes forever. Duplicati is decrypting all the files, rummaging around in them, and creating a new database. In your case I would estimate about 4-6 weeks. People here complain about it constantly, but the chief developers don't seem to care. In case of a corruption or database loss, people are just expected to swallow that pill. I have basically had to suspend all operations for about three weeks now because of it.

Topping off this mess is the fact that during this whole recreation Duplicati will write dozens if not hundreds of terabytes (approximately 100 GB per hour), so when you're done recreating it, quickly save your database, because the drive will be going soon.

Sadly, the developers do not inform you during install that this software was developed to make people miserable…

OK. I have another backup of my data. It's faster to just delete and recreate the whole backup.

I know you are frustrated, as are many others, but that is simply not the case. A lot of investigation has gone into this issue from what I've seen. Hopefully we will see improvement in this area soon!


I have the same problem, trying to recreate the database of a 150,000-file / 600 GB backup.
The 1 GB SQLite file was recreated in 10 minutes, but now Duplicati reads files in the temp dir.
Why the temp dir?
If it needs to read all the backup files in this case, how would this work
in the case of a cloud backup? It would run forever.

Sorry, this does not look very reliable!

Check block size on recreate #3758 was the fix, five days later. The problem is that the Beta is 13 months old. Hopefully this problem will be fixed soon. Stay tuned for an Experimental, which is sort of a pre-Beta.

What version are you on? If you're willing to install the Canary release (which looks like it's on its way towards the long-awaited Beta upgrade) on the system, it will possibly avoid a problem, if you're stuck around 70%-100% on the progress bar and if About → Show log → Live → Information shows constant dblock file downloads. I think all downloads pass through the temp dir. The remote files are typically encrypted ZIP files, and some of the processing that opens them to look for blocks uses actual files on the drive. While you might have genuinely missing blocks, the odds seem good that you hit:

Empty source file can make Recreate download all dblock files fruitlessly with huge delay #3747
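Since the dblock downloads pass through the temp dir, a crude way to confirm the recreate is still churning is to watch that directory for scratch files. A hedged Python sketch; the "dup-*" name pattern and the temp location are my assumptions, so substitute whatever file names you actually see there:

```python
# Watch the temp dir for Duplicati scratch files; a changing count or total
# size means downloads/unpacking are still happening. The "dup-*" pattern
# is an assumption -- adjust it to the names you actually observe.
import glob
import os
import time

tmp = os.environ.get("TMPDIR") or os.environ.get("TEMP") or "/tmp"

def safe_size(path: str) -> int:
    try:
        return os.path.getsize(path)
    except OSError:          # file vanished between glob and stat
        return 0

while True:
    files = glob.glob(os.path.join(tmp, "dup-*"))
    total = sum(safe_size(f) for f in files)
    print(f"{len(files)} scratch files, {total / 2**20:.1f} MiB in {tmp}")
    time.sleep(60)
```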

If you mean something was there in 10 minutes, it didn't change, and you assumed it was complete, what probably actually happened is that it went into fruitless-search mode and had nothing more to add. Looking inside the DB with a DB browser would probably show a flag saying it's still being worked on.
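If you want to look for that yourself, any SQLite browser works, or Python's sqlite3 on a copy of the file. A sketch under loud assumptions: I'm guessing the marker lives in a key/value Configuration table, so list the tables first and adapt to what is actually in your database, and only ever open a copy while Duplicati is working:

```python
# Peek at a COPY of the local Duplicati database, read-only. The table name
# "Configuration" and its Key/Value columns are assumptions -- check the
# table list printed first and adapt the query to what you actually find.
import sqlite3

con = sqlite3.connect("file:backup-copy.sqlite?mode=ro", uri=True)
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print("tables:", tables)

try:
    for key, value in con.execute("SELECT Key, Value FROM Configuration"):
        print(key, "=", value)          # look for an in-progress/repair marker
except sqlite3.OperationalError as err:  # assumed table/columns not present
    print("adapt the query:", err)
con.close()
```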

If you have another system, you could try installing Canary there and doing a direct restore, just to see if the partial temporary database recreate step runs well. You don't need to actually finish the restore unless you want to verify that all goes well if the original system is lost. Personally, I'd recommend the current Canary for general use anyway (due to all the fixes), but downgrading to Beta is tough, and I don't know whether you want to be extremely careful, have old versions you value highly, etc. Many options exist.

If you install Canary, you can consider changing Settings to the Beta channel to avoid being offered a new Canary with new bugs. While Canary builds have been very good lately, they are unpredictable, because that's where the new code goes in first.
Releases can be checked to see whether a given one comes with warnings, and what the feedback on it is…

Hi, thanks for your very fast reply; I am really impressed.
I tested Canary, and the DB rebuild was complete after 20 minutes. That's absolutely superb.

Canary also solved my second topic, the follow-up backup of the initial 600 GB test.
It now needs just 6 minutes to complete instead of 20.
This sounds great.

Thanks to you - and a happy new year to the whole team - you are doing a great job.
helmut
