Recreating database logic/understanding/issue/slow

With that 99.9941468% complete database, I did run list-broken-files with full-result.
It found 219 matches, 218 of which belong to temporary files of my AntiVir (whatever), and the last one being the old database from back when the world was still a better place. But that ship has sailed anyway.

I suppose I should now do purge-broken-files?
That would mess around in my backup though, wouldn’t it?

I will wait for a positive/negative and further advice before I do that…

Edit:
I did discover the --dry-run option and tried it; this was the outcome:

What can I learn from this?

list-broken-files shows life is not as wonderful as we might have liked in the recreated database; however, it seems that the file list is well understood (not sure what “whatever” refers to, though). purge-broken-files is behaving worse than I expected. I can’t find any other reports where it winds up at that “No transaction is active” error.

Both operations run fine for me on 2.0.4.5 using GUI Commandline. Was yours GUI or Command Prompt? Command Prompt is a totally separate process, and it was never totally clear to me whether the GUI had finished its operations; possibly it still has the database open (Process Explorer can show that). The CLI might be bumping into that usage; however, I’m not an expert in these SQLite errors. Stopping the Duplicati server will ensure no collision occurs.

Holding off the actual purge-broken-files until I look over the code (or someone else comes by and explains it) seems like a good move. Meanwhile, you can still start thinking about clearing that flag via a database edit. DB Browser for SQLite can first be tried with Open Database Read Only on the database (back it up first). It probably won’t open, because Duplicati on Windows obfuscates the databases to resist malware scans. To have Duplicati unobfuscate the databases, stop the tray icon or server, then start it with --unencrypted-database. Open the Configuration table and see whether repair-in-progress is there. I “think” that’s where it is.
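
If the database (or your backed-up copy) does open, a small script can do the same read-only peek that DB Browser would. This is only a sketch: the path is just an example, and the Configuration table’s Key/Value column layout is my assumption from what DB Browser displays, so verify it before touching anything.

    # Hedged sketch: inspect the repair-in-progress flag in a backed-up COPY
    # of the Duplicati job database. Column names "Key"/"Value" are assumed
    # from what DB Browser shows; adjust if your copy differs.
    import sqlite3

    db_copy = "C:/temp/JOBDB-copy.sqlite"  # example path to your backed-up copy

    con = sqlite3.connect(f"file:{db_copy}?mode=ro", uri=True)  # open read-only
    for key, value in con.execute(
        "SELECT Key, Value FROM Configuration WHERE Key = 'repair-in-progress'"
    ):
        print(key, "=", value)
    con.close()

    # Clearing the flag would be a DELETE/UPDATE on that row, but hold off
    # until the purge-broken-files question above is settled.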

I’m sorry, it was my attempt at slang to show utter disrespect for these poor temporary files :wink:
English is not my first language…

GUI Commandline. The result is the same (purge-broken-files with --dry-run and --full-results) from the machine that died in the first place, where Windows is freshly deployed and the Duplicati installation is a native 2.0.4.5.

I will look into it.

Edit:

No, the machine has since been restarted, and it had not touched the database for 24 hours when I stopped it.
I have opened a copy of the fetched database and it does not seem to be obfuscated.
I looked through my options (I had never seen an SQL database from the inside before), and under “Search Data”, table → Configuration, I find the entry repair-in-progress: true.

Would that be what I would have to change to false, or are there other things I would have to make sure of beforehand?

Maybe this is a hint, but I have redone my db again, this time with the database and temp files on a RAM disk.
It improved the experience from approximately 10-12 days to 6-7 days. It did get stuck again, though, on the last 0.000xxx percent.

Also:
I have run purge-broken-files on the actual backup and deleted the repair-in-progress stamp from the database, but when I try to continue using it I get the following error:

These are reported sometimes, both in the forum and as the usual way my main backup breaks. The self-check is well understood in terms of what it tests, but how that inconsistency happens isn’t completely known. Some cases might have been improved; e.g. “Unexpected difference in fileset” errors [2.0.3.9] might have been helped by fixes for Failed put operations do not result in error #3673 that went into canary.

Unexpected difference in fileset version 4: …found 6180 entries, but expected 6181 gives a workaround that sometimes works, where the offending version is deleted, but sometimes the problem just shows up in a different version. Also, some people don’t like to delete versions, because they don’t want to lose any. Copying off your backup files and database first would make sure you don’t go too far down the path of many deletes.

I don’t think it’s been tested extensively, but another option (at the price of another Recreate) is, instead of doing a full Recreate (actually a Repair without a database), to limit its versions to exclude the troubled one. --version looks like it has enough syntax that you could give a range of 0-38,40-999999 to just skip 39.
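
In case it helps to see that spelled out, here is a trivial sketch that just builds such a range string for any version you want to skip. Whether a database-less Repair actually honors --version given this way is exactly the untested part; the string format simply follows the example above.

    # Hedged sketch: build a --version range that skips one version.
    # The 999999 upper bound is just "large enough", as in the example above.
    def version_range_excluding(skip, upper=999999):
        parts = []
        if skip > 0:
            parts.append(f"0-{skip - 1}")
        parts.append(f"{skip + 1}-{upper}")
        return ",".join(parts)

    print("--version=" + version_range_excluding(39))  # --version=0-38,40-999999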

I have to say I’m disappointed that this issue can survive a Recreate (if I understood your steps correctly); however, beyond that it’s probably a separate topic, and not related to the original slowness of a Recreate.

2.0.4.18-2.0.4.18_canary_2019-05-12 fixes at least my issue #3747, where an empty file makes Recreate download all of the dblocks. Test it, if you like, on a separate install. Canary is bleeding edge and can break.

  • Ignoring empty remote files on restore, which speeds up recovery, thanks @pectojin

Just found this thread after attempting to perform a restore from Google Drive of a backup of about 2 million files (~400GB) that I created the other day on a remote Linux machine.

I’m also doing a test restore from said machine to itself, and while it did take a few minutes to get started, it’s been restoring and writing files for several hours and is currently at 221GB.

However, on my desktop PC, a test restore from this Drive backup has been stuck at “Recreating database…” for hours now. From the logs, I can see it’s doing dblock downloads, then processing and lots of waiting, then more dblock downloads. At this rate, it won’t finish for days.

I’m running Duplicati 2.0.4.23_beta_2019-07-14.

Aug 5, 2019 2:34 PM: RemoteOperationGet took 0:00:00:06.435
Aug 5, 2019 2:34 PM: Backend event: Get - Completed: duplicati-b11ed9ccca6db4c968b5acc86c753bfd6.dblock.zip.aes (249.90 MB)
Aug 5, 2019 2:34 PM: Downloaded and decrypted 249.90 MB in 00:00:06.4350832, 38.83 MB/s

...

Aug 5, 2019 2:17 PM: RemoteOperationGet took 0:00:00:04.828
Aug 5, 2019 2:17 PM: Backend event: Get - Completed: duplicati-b823e5f3c041940a787339677a97076e1.dblock.zip.aes (249.90 MB)
Aug 5, 2019 2:17 PM: Downloaded and decrypted 249.90 MB in 00:00:04.8280622, 51.76 MB/s

etc., with roughly 15-20 minutes of SQL and whatever else it’s doing in between (the raw SQL execution times in the log output don’t seem to add up to this much time).

At this point, not a single file has been restored; as far as I understand, it’s not downloading these dblocks to restore data but to recreate the db. Why is it not downloading dindex or dlist files? Why is it downloading all these dblock files instead?

Here’s the last part of the raw log from Profiling debug level (whatever Duplicati is showing currently under About -> Log data): https://gist.github.com/751e1d3d197487a44f33b1c3c84d44b7.

I’d been pretty impressed with Duplicati’s handling of large data until I tried a restore that didn’t originate from the machine that performed the backup (along with a separate issue of a pretty slow restore directory listing).

Related:

In my opinion, this is the Achilles heel of Duplicati: if it takes days to restore in the event of a data loss, it’s not a viable backup solution.

So is the best temporary solution to also back up the Duplicati databases themselves and then put them in place first, before restoring from another machine?

Specific cases of “why is it downloading” can’t be explained without a DB bug report and much work, but roughly estimating whether it’s in the 70%-80%, 80%-90%, or 90%-100% pass will give some insight into why it had to be done. The question of how the need for your dblock downloads arose in the first place is much deeper.

You can look at the history of the issue in this topic by pressing Ctrl-F and searching for the word empty.

Searching for the word passes will find discussion of the three levels of dblock search, if any is needed.
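
For orientation, here is a trivial way to read a recreate progress figure against those passes. Only the 70/80/90 split comes from this topic; the labels are my paraphrase and the rest is plain arithmetic.

    # Hedged sketch: bucket a recreate progress percentage into the dblock
    # search passes discussed above.
    def dblock_search_pass(progress_percent):
        if progress_percent < 70:
            return "dlist/dindex processing, no dblock search yet"
        if progress_percent < 80:
            return "first dblock search pass"
        if progress_percent < 90:
            return "second dblock search pass"
        return "third and final dblock search pass (often the lengthy one)"

    print(dblock_search_pass(95))  # third and final dblock search pass (often the lengthy one)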

To cover it further, v2.0.4.18-2.0.4.18_canary_2019-05-12 has a fix announced in the forum as follows:

  • Ignoring empty remote files on restore, which speeds up recovery, thanks @pectojin

Empty source file can make Recreate download all dblock files fruitlessly with huge delay #3747
is the GitHub issue on this, and probably describes the experience of many (one usually has empty files).

Check block size on recreate #3758 is the fix for this probably widespread but very specific problem case.

The fix is unfortunately not in a beta, but it is in the beta candidate v2.0.4.21-2.0.4.21_experimental_2019-06-28, which was not suitable for a beta due to an FTP problem. Instead, Release: 2.0.4.23 (beta) 2019-07-14 is basically 2.0.4.5 plus a warning that had to be done. Click that release announcement for more about that.

It would probably be worth trying 2.0.4.21 experimental on the separate machine for the restore test, but if it is installed on an existing backup system, it will upgrade databases and make it difficult to revert to 2.0.4.23.
Downgrading / reverting to a lower version covers that. Though that topic is DB-centric, even systems without a DB can have downgrade issues from design changes. At least databases group the issues in a few spots.

Backing up the DB in a different job that runs after the source-file backup would be a fine safeguard in case the fixes so far (such as those mentioned) still leave you in dblock downloads (the last 10% may be especially lengthy).
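
As a rough illustration of that safeguard (paths here are assumptions for typical installs; user-level Duplicati usually keeps its per-job .sqlite files under %LOCALAPPDATA%\Duplicati on Windows or ~/.config/Duplicati on Linux, but check where yours actually live), a second job could simply take those .sqlite files as its source. A quick way to list the candidates:

    # Hedged sketch: list local Duplicati job databases that a second,
    # DB-only backup job could use as its source. Paths are assumptions for
    # typical installs; check where your databases actually live.
    import glob, os, platform

    if platform.system() == "Windows":
        db_dir = os.path.join(os.environ.get("LOCALAPPDATA", ""), "Duplicati")
    else:
        db_dir = os.path.expanduser("~/.config/Duplicati")

    for db in sorted(glob.glob(os.path.join(db_dir, "*.sqlite"))):
        print(db)  # candidate source paths for the DB-backup job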

Keeping more than one version of the DB would be best, because the DB self-checking at the start of backups and other operations sometimes finds problems introduced somewhere in a prior backup, so the most recent DB backup would already have the problem, whereas the one before it might be good. Old DBs also tend to remove newer backup files from the remote if one runs repair (the old DB has never seen those files). A fix for that issue is being discussed, and repair/recreate is being entirely (and very slowly) redesigned anyway, so I don’t know what the future holds.

I’d note that some of the backup checking is between the DB and the remote, e.g. is everything still there, with expected content that hasn’t been corrupted on upload or on remote? Things do corrupt sometimes.

Keeping duplicate records has advantages over single-copy records, but it does lead to messages about unexpected differences. It also requires reconstruction of the duplicate records (i.e. the DB) if they’re lost. The flip side of that is that lost remote dlist and dindex files can be recreated from the database’s records.

Local DB is a tradeoff IMO, with pros and cons and (for the time being) beta bugs that need to be shaken out…

Another tradeoff IMO is the slicing and tracking that any block-based deduplicating backup has to do, but direct copying of source files (which some people do feel more comfortable with) is just hugely inefficient.
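
To make the “slicing and tracking” concrete, here is a generic, minimal sketch of fixed-size block deduplication. It is not Duplicati’s actual code or storage format; the 100KB figure just mirrors the default internal block size that comes up later in this thread.

    # Hedged sketch: generic fixed-size block deduplication, to illustrate the
    # kind of slicing/tracking any dedup backup does. Not Duplicati's code.
    import hashlib

    BLOCK_SIZE = 100 * 1024  # 100KB

    def dedup_file(path, store):
        """Split a file into blocks, keep each unique block once by hash,
        and return the list of hashes needed to rebuild the file."""
        hashes = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                h = hashlib.sha256(block).hexdigest()
                store.setdefault(h, block)  # only the first copy is kept
                hashes.append(h)
        return hashes

    # Usage sketch:
    # store = {}
    # recipe = dedup_file("somefile.bin", store)
    # print(len(recipe), "blocks,", len(store), "unique blocks stored")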

Be super-sure never to have two machines backing up to the same destination, doing repairs, etc. Each will form the remote into what it thinks is right, and they will step all over each other. Direct restore is OK.

For best certainty on a test restore from machine to itself, add --no-local-blocks to its backup configuration.

I’m not clear on all the machines being used, but I think a Linux backup needs a similar UNIX restore system.

You know, I did read about Check block size on recreate by Pectojin · Pull Request #3758 · duplicati/duplicati · GitHub, but I assumed that, since I’m using the latest beta with a version number higher than when it was merged, the fix was already in the beta; I see now I was wrong. I’ll upgrade to the experimental version and try a restore to another machine again.

I’ll also try another suggestion from here and raise the internal block size: Logic and usecase for remote subfolders - #14 by kees-z.

Yesterday, I imported the profile I was trying to restore into my local Windows desktop and tried a db rebuild. To my surprise, it reached the same 2GB db size in SQLite relatively quickly; however, the restore did not begin at that point, and I saw no further change in the db size or the restore (the restored files never started appearing). Instead, the job kept doing something (I think downloading dblocks and very slowly running some queries), and I ran out of patience and killed it this morning. What was it doing? I’m not sure, but it was definitely taking entirely too long to begin doing anything.

After my frustrations with Duplicati, I also installed and compared it to the Duplicacy CLI. While the Duplicacy web GUI was kind of broken, I found that with the CLI both backups and restores were robust and fast, so there’s a good benchmark for any future Duplicati performance for me.

Be super-sure never to have two machines backing up to the same destination, doing repairs, etc.

Yeah, I didn’t set up any schedules or start any backups, only restores. In both cases (a partial db recreate from a direct restore, and a full db rebuild from a configuration restore) it ended up extremely slow and never finished building the database in a reasonable amount of time.

For best certainty on a test restore from machine to itself, add --no-local-blocks to its backup configuration.

And probably --no-local-db as well, right?

I’m not clear on all the machines being used, but I think a Linux backup needs a similar UNIX restore system.

I think what I found during a smaller test restore of a Linux backup to a Windows machine was that the Linux symlinks didn’t get restored (as opposed to, say, restoring the files they point to in place of the symlinks). I didn’t expect that part, or permissions/ownership, to work, so I didn’t look into whether there are special restore flags related to such a Linux → Windows scenario.

Maybe. I’m not familiar with that one. I see descriptions making it sound like it is like the partial DB recreate done for a direct restore, which is a good test for disaster recovery. For seeing whether restore using the local DB is working (i.e. the non-disaster case), you would not want this switch, because the local DB then isn’t used.

I might be thinking of someone who wanted to move a file tree to a different OS and continue its backup.
The message in the code below suggests that restore has some tolerance for differences in OS, which is probably good…

I’m back after trying 2.0.4.21 (2.0.4.21_experimental_2019-06-28) as opposed to 2.0.4.23_beta_2019-07-14, blowing up my backups, and setting them up with 10MB internal blocks instead of 100KB.

Now this time, a restore of a remote Linux test backup of 15545 files (21.97 GB) to my local Windows desktop, without a db, went extremely fast. Granted, this is a test set and the real set of 2 million files and 400GB is still backing up, but the db repair/recreation step alone was taking ages before and now completed extremely fast: only 30 seconds.

The db itself is also a lot smaller thanks to a larger internal block size, which helps the speed tremendously.
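
That matches the arithmetic: the number of blocks the database must track shrinks in proportion to the block size. A back-of-the-envelope check for the ~400GB set (ignoring deduplication and metadata, so real numbers will differ):

    # Hedged back-of-the-envelope: rough block counts for ~400GB of source
    # data at the default 100KB block size vs. the 10MB size used above.
    data_bytes = 400 * 1024**3  # ~400GB

    for label, block_size in [("100KB", 100 * 1024), ("10MB", 10 * 1024**2)]:
        blocks = data_bytes // block_size
        print(f"{label} blocks: about {blocks:,} rows to track")

    # 100KB blocks: about 4,194,304 rows to track
    # 10MB blocks: about 40,960 rows to track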

I’ll be back with the results of the full restore when it completes, but things are looking a lot more promising with the fixes in experimental and this larger block size, at least in my case.

Now to get these fixes to beta and stable…

The large backup finally finished, and when I tried to recreate the db locally from the profile, this time it took only 25 minutes with the experimental version, whereas with the beta it was taking so long that I didn’t think it would finish even after a day.

I will try restoring both locally and to the same server next.

Did you notice if it was downloading large numbers of dblocks (either from the logs or by looking at the progress bar)?

Possibly the fix for the presumably common empty-source-file bug was enough (but it won’t always be…).

I can confirm the improvement in database rebuild performance. Using beta 2.0.4.23 on Windows, the rebuild job was aborted after more than 8 hours. The exact same rebuild using canary 2.0.4.28 took 29 minutes. The job was a cloud backup of just under 100GB of files using a 100MB block size.


I’ve had similar experiences with recreate, and even its worst part, which is failure to recreate. That story is in a different thread, but it is highly correlated with everything in this thread.

Personally, I see that as the only major drawback of current Duplicati. A slow rebuild would be acceptable in disaster-recovery situations, but failure shouldn’t be an option.

Everything else is already working well enough that I’m happy with it. But this is the thing which makes me sweat at night, because I could find any backup to be un-restorable at random, after an extremely slow database restore process. I did see in some other thread that the rebuild/recreate task is being improved, which hopefully solves this issue.


I was having the recreate issue as well, on Ubuntu 18. So I installed the 2.0.4.34 canary, and it was able to recreate the database. Thanks to whoever first posted this; I had not seen a Linux confirmation of this technique yet. The backend is B2.

I had to recreate my database, and was suffering the same issue described above with version 2.0.4.23_beta_2019-07-14. I installed the 2.0.4.34 canary, and the reconstruction of the database for a 175GB backup stored in the cloud completed in less than 15 minutes.


My database got corrupted because of a “disk full” condition (my fault). I threw it away and hit the Repair button in the web UI. It has been running for ~20 hours now, and the UI shows about 70% completion. The backup size is ~800 GByte. Is this a normal speed? Duplicati seems to read every file of the backup set, as my GBit network adapter shows full utilization toward the backup storage on the local network. It’s running in a 4-CPU-core Debian 10.9 virtual machine and, according to htop, uses one CPU core at 100%.

Can I do anything to speed it up?

Check About → Show Log → Live → Verbose and watch for new events there. Eventually you should see a message along the lines of “processing X of Y”. What are those X and Y numbers? And is it processing dblock files?

My understanding is that older versions of Duplicati may write dindex files incorrectly on the back end, and this is detected during a database recreation. Duplicati is then forced to download some or all dblock files, a potentially very slow process. (Normally, if the dlist and dindex files are correct, no dblock files need to be downloaded during a database recreation.)
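
As a very rough sketch of that decision flow (my paraphrase of the behavior described above, not Duplicati’s actual code): the dlist files say which block hashes are needed, the dindex files say where blocks live, and dblock volumes only get downloaded for hashes the dindex files failed to account for.

    # Hedged sketch of the recreate logic as described above (a paraphrase
    # for illustration, not Duplicati's implementation).
    def plan_recreate(needed_hashes, dindex_map, dblock_volumes):
        """Return which dblock volumes would have to be downloaded."""
        accounted = set(dindex_map)               # hashes the dindex files cover
        missing = set(needed_hashes) - accounted  # gaps, e.g. from bad dindex files
        if not missing:
            return []                    # fast path: dlist + dindex were enough
        return list(dblock_volumes)      # worst case: scan every dblock volume

    # One unaccounted-for hash is enough to force the slow dblock scan:
    print(plan_recreate({"h1", "h2"}, {"h1": "vol-a", "h2": "vol-a"}, ["vol-a", "vol-b"]))  # []
    print(plan_recreate({"h1", "h2"}, {"h1": "vol-a"}, ["vol-a", "vol-b"]))  # ['vol-a', 'vol-b']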

If you are on the new beta version, there is a way to fix the root problem (incorrectly written dindex files), but unfortunately it requires a functioning database first. So you really do need to let this process finish.


I currently see this:

So, yes, it’s reading the dblock files. I’m still waiting for the “Processing X of Y” line to appear. The machine is still quite busy.

Network activity has calmed down.

Still running:

I noticed it’s still heavily reading the disk, but the local disk only contains the database files of Duplicati.

Update a while later: the rebuild has now successfully finished, after ~1 day 16 hours. I did not understand why it first read all the remote files and then spent hours on disk read activity only on the drive where the database resides.