Restoring a large DB

Recently I’ve been getting errors with the database. So I closed Duplicati, copied a version of the DB, then started running a database recovery. After a week or so it was only partially done, so it seems it would take months to complete, which doesn’t work for me.

I do have an older database backup from May 2023. Is there any chance of using this as a base and then perhaps upgrading from there? Or should I just cut my losses and start a brand new backup?

Robert

You can try grabbing the test build for the next Canary from here.
It has a fix for this problem. Note that for a large database with serious damage this can’t be fast, but it could hopefully be less than 36 hours (that was the time needed for the last two people who had this problem here). After the rebuild, some cleaning of invalid data may also be necessary (with purge-broken-files, for example).
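If it comes to that, here is a rough sketch of what the cleanup could look like from the command line; the destination URL and --dbpath value are placeholders for your own backup, and passphrase/encryption options are omitted:

    # Preview which files reference broken or missing data (placeholder URL and paths):
    Duplicati.CommandLine.exe list-broken-files "file://X:\destination" --dbpath="C:\Duplicati\backup.sqlite"
    # Then remove the broken remnants from the backup:
    Duplicati.CommandLine.exe purge-broken-files "file://X:\destination" --dbpath="C:\Duplicati\backup.sqlite"

The same commands can also be run from the job’s Commandline screen in the GUI, if you prefer that route.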

Note that this build was done 2 days ago and these builds expire in 3 days, so don’t delay before downloading it.

I’d not recommend trying that; it could lead to data loss on the backend and more time wasted.

Ok great. Would I just install it on top of the current version I have? Is it possible to downgrade in case it doesn’t help?

If you are current (2.0.7), yes and yes. If you have an older version, it can depend on how ancient it is.

Thank you so much. 36 hours sounds a lot more promising than months. I’ll give it a go and see where I get.

Yeah, there didn’t seem to be any speed increase in the newer version. I started it just before Christmas and it had just about finished a few days ago, then we had a power failure and it failed.
Ultimately it’s my own fault; I felt invincible and so had 3 years’ worth of daily backups (which I realize now is a little overkill). So it’s quicker to start the backup from scratch, but this time I have a more thought-out retention policy in place, which hopefully will help.

Thanks,
Robert

It’s unfortunate that you did not post about that while it was happening; now that it’s done, there is no way to investigate why this did not fix it for you. You did not produce a trace file, by any chance?

Oh, I guess it didn’t quite finish. The first time, I misread it as having finished.

Logs of a run would be best, but did you at least watch the progress bar?

A live log at verbose level would also help. Somehow you were following its status.
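For a future run, a sketch of capturing a verbose log to a file; the destination URL and paths are placeholders, and the same two log options can also be added as advanced options on a GUI job:

    # Recreate/repair the database while writing a verbose log for later investigation:
    Duplicati.CommandLine.exe repair "file://X:\destination" --dbpath="C:\Duplicati\backup.sqlite" --log-file="C:\temp\recreate.log" --log-file-log-level=Verbose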

I think the speedup is mainly for damaged backups, which often show up as 90%+ on the progress bar.
If the slowness comes only from a large database, the memory cache can be tuned.

What’s the source size? At the defaults, Duplicati slows down beyond about 100 GB of source data.
The blocksize should scale accordingly, but changing it requires starting a fresh backup.
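As a rough back-of-the-envelope illustration of why source size matters, assuming the default 100 KB blocksize:

    # Approximate block counts at the 100 KB default:
    #   100 GB source / 100 KB per block ≈ 1 million blocks
    #     2 TB source / 100 KB per block ≈ 20 million blocks
    # Roughly speaking, every block is tracked in the local SQLite database,
    # so block count drives database size and query time.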

Yeah, sorry, the backup failed due to a power failure, and rather than try to do it again I decided to start fresh. I never thought to keep a log, as Duplicati itself seemed to run fine (just long).

Yeah, so the source is ~2 TB of data and I was keeping over 3 years’ worth of daily backups. I had just left the block size at 50 KB.

That might be blending two things. The blocksize default is 100 KB. There’s a request to make it 1 MB.
The dblock-size is the size of a file of blocks, and is usually set via “Remote volume size”; that default is 50 MB.
Choosing sizes in Duplicati covers this, but the Options GUI only highlights volume size, not blocksize.

If you’re starting a new backup, it would be a good chance to scale blocksize up by, say, a factor of 20.
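For example, a sketch of starting the new job from the command line with a 20x larger blocksize; the destination URL, source path, and database path are placeholders, and passphrase/encryption options are omitted. In the GUI, the same blocksize advanced option has to be set before the first run, since it can’t be changed afterwards:

    # New backup with blocksize scaled from 100 KB to 2 MB:
    Duplicati.CommandLine.exe backup "file://X:\destination" "D:\data" --blocksize=2MB --dbpath="C:\Duplicati\new-backup.sqlite"
    # At 2 MB blocks, ~2 TB of source is about 1 million blocks, similar to 100 GB at the default.
    # The "Remote volume size" (--dblock-size) can stay at its 50 MB default.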

Preview of security and bugfix release Canary 105 included some attempts at helping scale via cache size; however, it’s a manual setting of CUSTOMSQLITEOPTIONS_DUPLICATI and isn’t well-documented…
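I haven’t verified the exact syntax, but as I understand it the variable holds SQLite PRAGMA assignments, so a sketch might look like this (the cache_size value is only an illustration; a negative value is in KiB, so -200000 is roughly 200 MB of page cache):

    # Assumed format: PRAGMA assignments passed to the local SQLite database.
    export CUSTOMSQLITEOPTIONS_DUPLICATI="cache_size=-200000"
    # On Windows, set it as an environment variable before Duplicati starts instead of using export.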

Regardless, the chance for a faster recreate seems gone, and we’re not sure how far yours had gotten?