Recreating database logic/understanding/issue/slow

There’s a per-file hash taken at backup time that’s compared at restore time, so the restored files should be verified fine.

I lost track of things. So if this was a full Recreate rather than a “direct restore”, it “should” show multiple versions (I’m glad it got all the way out of Temp before things went bad). A direct restore is single-version, which should in theory be faster, but it’s also single-use, even for that restore, and won’t do more backups.

Continuing to use the database will see it get a bunch of self-checks, primarily before and after a backup. Some of these are internal DB consistency checks, while a few do some checks of the backup files. There are only a few on-demand DB test tools. The REPAIR command isn’t something to run routinely, as it attempts to synchronize the database and the remote. This backfires spectacularly if the local DB was restored from something like an old DB backup, because it finds remote files it doesn’t know about and therefore deletes them. Your database “should” be super-fresh, but I’m not 100% sure how sane it is. Protect your backed-up files (which might be hard without a copy or a folder permission change) and your hard-earned database (make a copy). The LIST-BROKEN-FILES command is supposed to be safer, as it’s the informational sister command (PURGE-BROKEN-FILES is the one that acts).

There may be some risk of hidden problems in older versions, in either the database or the remote, being discovered when compact runs and gathers partially-filled files of various ages into new dblocks. The most conservative approach would probably be to start again but keep the old backup for history, though that has drawbacks. You can also do test restores of as many old versions as you like, and you could run the test command, which samples the remote files (up to and including a sample of all of them) to check their integrity against the DB’s view, which in your case “should” match really well after the recent Recreate. A smaller version of this check also runs on every backup, and it can be told to pick a larger test sample by setting --backup-test-samples as desired.
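
For example, a rough sketch rather than an exact command for your setup (take your own backup command line and just add the option; the URL and source folder here are placeholders):

Duplicati.CommandLine.exe backup <storage-URL> <source-folder> --backup-test-samples=10 [<your other options>]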

Over time, databases can become less efficient in space usage. See forum discussions about VACUUM.
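
If your version lists the vacuum command in its CLI help (I believe recent 2.0.4.x builds do, but check yours), a minimal sketch would be something like:

Duplicati.CommandLine.exe vacuum <storage-URL> --dbpath="C:\ProgramData\Duplicati\<your-job-db>.sqlite" [<other exported options>]

Newer builds also appear to have an --auto-vacuum option; the forum threads discuss whether running it routinely is worth the time.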

I found two reports of the Azure problem specifically, and one as a side note. None have been figured out.

Test with --full-remote-verification Throws an Error and Hangs
Backups not running/error out after update to 2.0.4.5_beta_2018-11-28:
Metadata was reported as not changed, but still requires being added?

Yes, the last, somewhat successful run was a full, hands-on recreate.
I tried to just restore my files, but because I wanted to restore the majority of files out of my latest backup, it started going through all the dblocks nonetheless, so I put the task onto a somewhat stronger machine and did a proper recreate, because at this point that seemed more sensible than doing the same thing for one-time use only.

I mangled my backup files somewhat by restoring a database out of the backup itself and running the repair command on it. That seemed to delete the latest two iterations, which were faulty due to a bad connection and/or an already wonky machine anyway. Trying to verify this database still did not check out, so I started recreating a new one.

Yeah, over my dead body maybe…

That sounds interesting, I will look into it and return if questions occur.
Just from flying over the docs, I don’t really get the CLI though.
Could you give me a syntax example of what the “most thorough” test would look like?

Using the Command line tools from within the Graphical User Interface may be the easiest and safest path, but I can’t really give syntax for it because it’s a web page. Using Duplicati from the Command Line is the external CLI, and the easiest way to get going there from an existing GUI backup is with Export as Command-line. Either way, you may need to adjust the command from backup to your new command. Here’s one I adjusted for a test:

The syntax is Duplicati.CommandLine.exe test <storage-URL> <samples> [<options>]; below is mine after trimming a bunch of exported options.

C:\>"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" test "file://C:\Duplicati Backups\local test 1\\" all --backup-name="local test 1" --dbpath="C:\ProgramData\Duplicati\87697679687271856580.sqlite" --encryption-module= --compression-module=zip --dblock-size=50mb --no-encryption=true  --disable-module=console-password-input --full-remote-verification
  Listing remote folder ...
  Downloading file (1.43 KB) ...
  Downloading file (21.38 KB) ...
  Downloading file (21.39 KB) ...
  Downloading file (21.39 KB) ...
  Downloading file (20.72 KB) ...
  Downloading file (1.69 KB) ...
  Downloading file (21.87 KB) ...
  Downloading file (11.64 KB) ...
  Downloading file (608 bytes) ...
  Downloading file (31.86 KB) ...
  Downloading file (1.23 KB) ...
  Downloading file (610 bytes) ...
  Downloading file (48.08 KB) ...
  Downloading file (31.80 KB) ...
  Downloading file (609 bytes) ...
  Downloading file (40.73 KB) ...
  Downloading file (700 bytes) ...
  Downloading file (5.13 KB) ...
  Downloading file (15.14 MB) ...
  Downloading file (16.59 MB) ...
  Downloading file (15.19 MB) ...
  Downloading file (700 bytes) ...
  Downloading file (15.08 MB) ...
  Downloading file (700 bytes) ...
  Downloading file (16.38 MB) ...
Examined 25 files and found no errors

C:\>

I threw in not only all but also --full-remote-verification (which sometimes goes off for people, and it’s not clear whether it’s actually a problem or just noise; figuring that out might get into way too much file dissection). Note that this is primarily a check of the remote volumes (dlist, dindex, dblock) that the backup makes, so the focus isn’t on whether everything makes sense as a system that could be turned into backup and restore.

If you ever go for a restore test after having rebuilt your system, using --no-local-blocks will turn off the speed enhancement that builds restored files out of any suitable blocks that already exist locally. This ensures that everything comes from the backup files, so it’s a better test of lost-drive disaster recovery.
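
A hedged sketch of what that might look like from the CLI (placeholders throughout; start from your own Export as Command-line and change the command from backup to restore):

Duplicati.CommandLine.exe restore <storage-URL> "*" --restore-path="C:\RestoreTest" --no-local-blocks=true [<other exported options>]

Restoring into an empty --restore-path rather than over the originals keeps the test from touching your live files.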

Now that I have most of my data back and a database that seems sufficient for restoring things, I tried reinstating it to continue backing up.
But I get the following message, regardless of whether I do a repair first:

Does that mean that, because Duplicati didn’t finalize the recreated backup, I would honestly have to do it again?

In the copy-the-database-early experiment (which at least we didn’t have to do from Temp), the second step was to clear that flag, then test and hope for the best, on the theory that it wasn’t really making progress anyway.

The somewhat more normal way to clear repair-in-progress looks like it’s list-broken-files (hopefully you don’t have any) and then purge-broken-files, from your choice of GUI Commandline or a true command line based on an Export as Command-line. In either case, adjust the syntax from backup to the desired command. Please make a copy of your hard-earned database before you try getting it bent back into good operation.
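
Roughly, after that adjustment, the two commands might look like this (placeholders throughout; keep --dry-run on purge-broken-files until its output looks right, and only then run it for real):

Duplicati.CommandLine.exe list-broken-files <storage-URL> --dbpath="<your-job-db>.sqlite" --full-result [<other exported options>]
Duplicati.CommandLine.exe purge-broken-files <storage-URL> --dbpath="<your-job-db>.sqlite" --dry-run [<other exported options>]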

Getting more daring, we could also open the database and remove the status, but let’s not go there first.

There’s a third option that’s even more leading-edge: basically doing a field trial of the developing fix for the issue.

I did do, with that 99,9941468% complete database, a list-broken-files with --full-result.
It found 219 matches, 218 entries of which belong to temporary files of my AntiVir - whatever -, and the last one being the old database from back when the world was still a better place. But that ship has sailed anyway.

I suppose I should now do purge-broken-files?
That would mess around in my backup though, wouldn’t it?

I will wait for a positive/negative and further advice before I do that…

Edit:
I did discover the --dry-run option and tried it; this was the outcome:

What can I learn from this?

list-broken-files shows life is not as wonderful as we might have liked in the recreated database; however, it seems that the file list is well understood (not sure what “whatever” refers to, though). purge-broken-files is behaving worse than I expected. I can’t find any reports where it winds up at that “No transaction is active”.

Both operations run fine for me on 2.0.4.5 using GUI Commandline. Was yours GUI or Command Prompt? Command Prompt is totally separate, and it was never totally clear to me whether the GUI finished its operations; possibly it still has the database open (Process Explorer can show that). The CLI might be bumping into that usage; however, I’m not an expert in these SQLite errors. Stopping the Duplicati server will ensure no collision occurs.

Holding off the actual purge-broken-files until I look over the code (or someone else comes by and explains it) seems like a good move. Meanwhile, you can still start thinking about clearing that flag via a database edit. DB Browser for SQLite can first be tried with an Open Database Read Only on the database (back it up first). It probably won’t open, because Duplicati on Windows obfuscates the databases to resist malware scans. To have Duplicati unobfuscate databases, stop the tray icon or server, then start it with --unencrypted-database. Open the Configuration table and see if you can find repair-in-progress there. I “think” that’s where it is.
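
For those who prefer a command line over DB Browser, once you have an unobfuscated copy, a peek might look like this (the Key/Value column names are my assumption about the Configuration table layout; adjust if yours differ):

sqlite3 "C:\path\to\copy-of-your-job-db.sqlite" "SELECT * FROM Configuration WHERE Key = 'repair-in-progress';"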

I’m sorry, it was my attempt at slang to show utter disrespect for these poor temporary files :wink:
English is not my first language…

GUI Commandline. The result is the same (purge-broken-files with --dry-run and --full-result) from the machine that died in the first place, where Windows is freshly deployed and the Duplicati installation is a native 2.0.4.5.

I will look into it.

Edit:

No, the machine has since been restarted, and it had not touched the database for 24 hours when I stopped it.
I have opened a copy of the fetched database, and it does not seem to be obfuscated.
I looked through my options (I had never seen an SQL database from the inside before), and under “Search Data”, table -> Configuration, I find the entry repair-in-progress: true.

Would that mean I have to change that to false, or are there other things I would have to make sure of beforehand?

Maybe this is a hint, but I have redone my DB again, this time with the database and temp files on a RAM disk.
It improved the experience from approximately 10-12 days to 6-7 days. It did get stuck again though, on the last 0.000xxx percent.

Also:
I have run purge-broken-files on the actual backup, and deleted the repair-in-progress stamp from the database, but when I try to continue using it I get the following error:

These are reported sometimes, both as forum reports and also as the usual way my main backup breaks. This self-check is well understood in terms of what it tests, but how that inconsistency happens isn’t completely known. Some cases might have been improved, e.g. “Unexpected difference in fileset” errors [2.0.3.9] might have been helped by fixes for Failed put operations do not result in error #3673 that went into canary.

“Unexpected difference in fileset version 4: …found 6180 entries, but expected 6181” gives the workaround that sometimes works, where the offending version is deleted, but sometimes the problem shows up in a different version. Also, some people might not like to delete versions, because they don’t want to lose any. Copying off your backup files and database would make sure you don’t go down the path of many deletes.
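
If you do take the delete-the-offending-version route (after copying off the database and backup files), a rough sketch would be along these lines, with the version number being whatever your error names (4 in that example title):

Duplicati.CommandLine.exe delete <storage-URL> --version=4 --dbpath="<your-job-db>.sqlite" [<other exported options>]

If the delete command honors --dry-run in your version, adding it first shows what would go away without actually removing anything.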

I don’t think it’s been tested extensively, but another option (at the price of another Recreate) is, instead of doing a full Recreate (actually a Repair without a database), to limit its versions to exclude the troubled one. --version looks like it has enough syntax that you could give a range of 0-38,40-999999 to just avoid 39.
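
An untested sketch of that idea, leaning only on the --version syntax described in the manual and on Repair-without-a-database being what triggers a Recreate, so treat it as speculation:

Duplicati.CommandLine.exe repair <storage-URL> --version=0-38,40-999999 --dbpath="<path-for-the-new-db>.sqlite" [<other exported options>]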

I have to say I’m disappointed that this issue can survive a Recreate (if I understood your steps correctly), however beyond that it’s probably a separate topic, and not related to the original slowness of a Recreate.

2.0.4.18-2.0.4.18_canary_2019-05-12 fixes at least my #3747 Issue where an empty file makes Recreate download all of the dblocks. Test, if you like, on a separate install. Canary is bleeding-edge and can break.

  • Ignoring empty remote files on restore, which speeds up recovery, thanks @pectojin

Just found this thread after attempting to perform a restore from Google Drive of a backup of about 2 million files (~400GB) that I created the other day on a remote Linux machine.

I’m also doing a test restore from said machine to itself, and while it did take a few minutes to get started, it’s been restoring and writing files for several hours and is now at 221GB.

However, on my desktop PC, a test restore from this Drive has been stuck on “Recreating database…” for hours now. From the logs, I can see it’s doing dblock downloads, then processing and lots of waiting, then more dblock downloads. At this rate, it won’t finish for days.

I’m running Duplicati 2.0.4.23_beta_2019-07-14.

Aug 5, 2019 2:34 PM: RemoteOperationGet took 0:00:00:06.435
Aug 5, 2019 2:34 PM: Backend event: Get - Completed: duplicati-b11ed9ccca6db4c968b5acc86c753bfd6.dblock.zip.aes (249.90 MB)
Aug 5, 2019 2:34 PM: Downloaded and decrypted 249.90 MB in 00:00:06.4350832, 38.83 MB/s

...

* Aug 5, 2019 2:17 PM: RemoteOperationGet took 0:00:00:04.828
* Aug 5, 2019 2:17 PM: Backend event: Get - Completed: duplicati-b823e5f3c041940a787339677a97076e1.dblock.zip.aes (249.90 MB)
* Aug 5, 2019 2:17 PM: Downloaded and decrypted 249.90 MB in 00:00:04.8280622, 51.76 MB/s

and so on, with roughly 15-20 minutes of SQL and whatever else it’s doing in between (the raw SQL execution times in the log output don’t seem to add up to this much time).

At this point, not a single file has been restored. As far as I understand, it’s not downloading these dblocks for the purpose of restoring data, but to recreate the DB. Why is it not downloading dindex or dlist files? Why is it downloading all these dblock files instead?

Here’s the last part of the raw log from Profiling debug level (whatever Duplicati is showing currently under About -> Log data): https://gist.github.com/751e1d3d197487a44f33b1c3c84d44b7.

I had been pretty impressed with Duplicati’s handling of large data until I tried a restore that didn’t originate from the machine that performed the backup (along with a separate issue of a pretty slow restore directory listing).

Related:

In my opinion, this is the Achilles heel of Duplicati: if it takes days to restore in the event of data loss, it’s not a viable backup solution.

So is the temporary best solution to also back up the Duplicati databases themselves and then first put them in place before restoring from another machine?

Specific cases of “why is it downloading” can’t be explained without a DB bug report and much work, but roughly estimating whether it’s in the 70%-80%, 80%-90%, or 90%-100% pass will give some insight into why it had to be done. The question of how the need for your dblock downloads arose in the first place is hugely deeper.

You can look at the history of the issue in this topic by pressing Control-F and searching for the word empty.

Searching for the word passes will find discussion of the three levels of dblock search, if any is needed.

To cover it further, v2.0.4.18-2.0.4.18_canary_2019-05-12 has a fix announced in the forum as follows:

  • Ignoring empty remote files on restore, which speeds up recovery, thanks @pectojin

Empty source file can make Recreate download all dblock files fruitlessly with huge delay #3747
is the GitHub issue on this, and probably describes the experience of many (one usually has empty files).

Check block size on recreate #3758 is the fix for this probably widespread but very specific problem case.

The fix is unfortunately not in a beta, but it is in the beta candidate v2.0.4.21-2.0.4.21_experimental_2019-06-28, which was not suitable for a beta due to an FTP problem. Instead, Release: 2.0.4.23 (beta) 2019-07-14 is basically 2.0.4.5 plus a warning that had to go out. Click that release announcement for more about that.

It would probably be worth trying 2.0.4.21 experimental on the separate machine for the restore test, but if it’s installed on an existing backup system, it will upgrade databases and make it difficult to revert to 2.0.4.23.
Downgrading / reverting to a lower version covers that. Though the issue is DB-centric, even systems without a DB have potential downgrade issues from design changes. At least databases group the issues in a few spots.

Backing up the DB in a different job that runs after a source-file backup would be a fine safeguard, in case the fixes so far (such as those mentioned) still leave you doing dblock downloads (the last 10% may be especially lengthy).
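
A minimal sketch of such a follow-up job, run from the CLI after the main backup finishes (the second destination URL is a placeholder, and the database path is whatever your main job’s Database page shows; a small GUI job pointed at the same file works just as well):

Duplicati.CommandLine.exe backup <second-storage-URL> "C:\ProgramData\Duplicati\<main-job-db>.sqlite" [<options>]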

Keeping more than one version of the DB would be best, because sometimes the DB self-checking at the start of backups and other operations finds problems introduced somewhere in the prior backup, so the prior DB backup would have the problem, whereas the one before it might be good. But old DBs also tend to remove newer backup files from the remote if one runs repair (the old DB has never seen those files). A fix for that issue is being discussed, and repair/recreate is being entirely (and very slowly) redesigned anyway, so I don’t know what the future holds.

I’d note that some of the backup checking is between the DB and the remote, e.g. is everything still there, with the expected content, not corrupted on upload or on the remote? Things do corrupt sometimes.

Keeping duplicate records has advantages over single-copy records, but it does lead to messages about unexpected differences. It also requires reconstruction of the duplicate records (i.e. the DB) if they’re lost. The flip side of that is that lost remote dlist and dindex files can be recreated from the database’s records.

Local DB is a tradeoff IMO, with pros and cons and (for the time being) beta bugs that need to be shaken out…

Another tradeoff IMO is the slicing and tracking that any block-based deduplicating backup has to do, but direct copying of source files (which some people do feel more comfortable with) is just hugely inefficient.

Be super-sure never to have two machines backing up to the same destination, doing repairs, etc. Each will form the remote into what it thinks is right, and they will step all over each other. Direct restore is OK.

For best certainty on a test restore from a machine to itself, add --no-local-blocks to its backup configuration.

I’m not clear on all the machines being used, but I think a Linux backup needs a similar UNIX restore system.

You know, I did read about Check block size on recreate by Pectojin · Pull Request #3758 · duplicati/duplicati · GitHub, but I assumed that since I’m using the latest beta, with a version number higher than when it was merged, the fix was already in the beta. I see now that I was wrong. I’ll upgrade to the experimental version and try a restore to another machine again.

I’ll also try another suggestion from here and up the internal block size: Logic and usecase for remote subfolders.

Yesterday, I imported the profile I was trying to restore into my local Windows desktop and tried a DB rebuild. To my surprise, it reached the same 2GB DB size in SQLite relatively quickly; however, the backup did not begin at this point, and I saw no further changes to the DB size or the restore (the restored files never started appearing). Instead, the job kept doing something (I think downloading dblocks and very slowly running some queries), and I ran out of patience and killed it this morning. What was it doing? I’m not sure, but it was definitely taking entirely too long to begin doing anything.

After my frustrations with Duplicati, I also installed duplicacy CLI and compared the two. While the duplicacy web GUI was kind of broken, I found that with the CLI both backups and restores were robust and fast, so that’s a good benchmark for any future Duplicati performance for me.

Be super-sure never to have two machines backing up to the same destination, doing repairs, etc.

Yeah, I didn’t set up any schedules or start doing any backups, only restores. In both cases, a partial DB restore from a direct restore and a full DB rebuild from a configuration restore ended up extremely slow and never finished building the database in a reasonable amount of time.

For best certainty on a test restore from a machine to itself, add --no-local-blocks to its backup configuration.

And probably --no-local-db as well, right?

I’m not clear on all the machines being used, but I think a Linux backup needs a similar UNIX restore system.

I think what I found during a smaller test restore of a Linux backup to a Windows machine was that the Linux symlinks didn’t get restored (as opposed to maybe restoring the files they point to in place of the symlinks). I didn’t expect that part or permissions/ownership to work, so I didn’t look into whether there are special restore flags for such a Linux-to-Windows scenario.

Maybe. I’m not familiar with that one. I see descriptions making it sound like it’s similar to the partial DB recreate done for a direct restore, which is a good test for disaster recovery. For seeing whether restores using the local DB are working (i.e. the non-disaster case), you would not want this switch, because the local DB then isn’t used.

I might be thinking of someone who wanted to move a file tree to a different OS and continue its backup.
The message in the code below looks like restore has some tolerance for differences in OS, which is probably good…

I’m back after trying 2.0.4.21 (2.0.4.21_experimental_2019-06-28) as opposed to 2.0.4.23_beta_2019-07-14, blowing up my backups, and setting them up with 10MB internal blocks instead of 100KB.

Now this time, a restore of a remote Linux test backup of 15545 files (21.97 GB) to my local Windows desktop, without a DB, went extremely fast. Granted, this is a test set and the real set of 2 million files and 400GB is still backing up, but the DB repair/recreation step alone, which was taking ages before, now completed extremely fast: only 30 seconds.

The db itself is also a lot smaller thanks to a larger internal block size, which helps the speed tremendously.

I’ll be back with the results of the full restore when it completes, but things are looking a lot more promising with the fixes in experimental and this larger block size, at least in my case.

Now to get these fixes to beta and stable…

The large backup finally finished, and when I tried to recreate the DB locally from the profile, this time it took only 25 minutes with the experimental version, whereas with the beta it was taking so long I didn’t think it would finish even after a day.

I will try restoring both locally and to the same server next.

Did you notice whether it was downloading large numbers of dblocks (either from the logs or by looking at the progress bar)?

Possibly the fix for the presumably common empty-source-file bug was enough (but it won’t always be…).

I can confirm the improvement in database rebuild performance. Using Beta 2.0.4.23 on Windows, the rebuild job was aborted after more than 8 hours. The exact same rebuild using Canary 2.0.4.28 took 29 minutes. The job was a cloud backup of just under 100GB of files using a 100MB block size.


I’ve had similar experiences with recreate, and even with the worst part of it, which is failure to recreate. That story is in a different thread, but it’s highly correlated with everything in this thread.

Personally, I see that as the only major drawback of current Duplicati. A slow rebuild would be acceptable in disaster recovery situations, but failure shouldn’t be an option.

Everything else is already working well enough that I’m happy with it. But this is the thing that makes me sweat at night, because I could find any backup to be un-restorable at random after an extremely slow database restore process. But I did see in some other thread that the rebuild/recreate task is being improved, which hopefully solves this issue.
