Restore dlist files

Hi guys, I’m in trouble! I accidentally wiped out every version of the backup except the latest. I don’t know whether dblock files were deleted (I used the command-line tool’s delete command, so a compact operation shouldn’t have been performed), but all dlist files except the latest were wiped out. Fortunately I have a copy of the local DB from before the accident. My question is: is it possible to recreate the dindex files and check the consistency of the backup set?

I’m not so sure, unless you set no-auto-compact:

If a large number of small files are detected during a backup, or wasted space is found after deleting backups, the remote data will be compacted. Use this option to disable such automatic compacting and only compact when running the compact command.
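As a command-line sketch of what that looks like (the destination URL, version number, and database path below are placeholders, not taken from your setup):

```
REM Delete a version but suppress the automatic compact that can follow.
REM Version numbers count from 0 = newest; URL and --dbpath are placeholders.
Duplicati.CommandLine.exe delete "file://G:\backup" --version=1 ^
  --dbpath="C:\Users\me\AppData\Local\Duplicati\XXXXXXXXXX.sqlite" ^
  --no-auto-compact=true
```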

In normal circumstances, the Repair button replaces dlist and dindex files that should be there but are not. You’re not in normal circumstances though, and Repair can sometimes do bad things if the DB is incorrect.

You can see whether your delete made a job log. If so, see if there is Compact information. If not, its detailed Complete log will probably have "BackendStatistics" including "FilesDeleted", so you can check whether the count is reasonable for just a version deletion, or whether more files were deleted than versions, which is a bad sign.
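The field names below are from memory and the numbers are invented, but the relevant part of a Complete log is shaped roughly like this. One deleted file per removed version is the healthy case; downloads and uploads here hint that a compact also ran:

```
"BackendStatistics": {
    "FilesDeleted": 1,
    "FilesUploaded": 0,
    "FilesDownloaded": 0
}
```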

A direct way to see what was done to the destination is <job> → Show log → Remote. If you see other deletes besides dlist files, that’s probably compact doing a get of old files, leading to a put of new ones, then a delete of the old ones. Because the log is reverse-chronological, the start might be on a later page.
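A compact tends to show up in that Remote log as a pattern roughly like this (filenames invented; newest entry first because the view is reverse-chronological):

```
delete  duplicati-b11111...dblock.zip          <- old volumes removed last
delete  duplicati-i22222...dindex.zip
put     duplicati-b33333...dblock.zip          <- repacked volume uploaded
get     duplicati-b11111...dblock.zip          <- old volumes downloaded first
delete  duplicati-20221119T060000Z.dlist.zip   <- the version deletion itself
```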

I see it, and discovered that dlist, dindex and dblock files were deleted… at this point I suppose the right thing is to keep the current backup and throw away two years of backups… :roll_eyes: Fortunately the latest version is still here. Unfortunately I don’t have a backup copy of this backup set.

Ouch.

Some destinations (such as OneDrive) have a Recycle bin concept where deletions linger awhile.
If the files are truly gone, then the delete and compact did as requested although not as desired…

Local NTFS file system :frowning:

Good news!
I found a volume shadow copy dated 19/11/2022; if I restore data from it I lose only two months of backups: not bad =).

So the steps I’ll follow are:

  1. restore the last version of the sqlite file (this file is restored from a backup set contained in the shadow copy too).

  2. restore the whole backup set from the shadow copy.

  3. swap the local backup-related sqlite DB file and the backup set

  4. try a restore: I should see versions up until the 19/02/2022 version (a command-line check for this is sketched below)

In the end it looks as if I simply didn’t perform a backup for two months (from 19/11/2022 to yesterday).
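(For reference, step 4 can also be checked from the command line; the list command with no file argument shows the known versions. The URL and database path here are placeholders:)

```
REM List all backup versions; I should see history back past 19/11/2022.
Duplicati.CommandLine.exe list "file://G:\backup" ^
  --dbpath="C:\Users\me\AppData\Local\Duplicati\XXXXXXXXXX.sqlite"
```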

Just a question: in the configuration tab of this backup I modified options related to backup retention and other stuff… is this information archived in the GUI-related configuration files, or is it stored in the backup-related sqlite database too?

Hello

My very personal opinion is that this kind of fiddling with a complicated piece of software is dangerous stuff. I’d never try to do that if I had deleted part of a backup. Better to start again with a new backup than to risk having an unreliable backup that will fail you when you need it.

Do you refer to Duplicati or to Volume Shadow Copy?

I refer to Duplicati. You are trying to make a broken setup work again; this is what I mean by ‘fiddling’, while you are merely using shadow copy. If you were trying to do binary edits on the shadow copy’s control files, you would be fiddling with it. That’s all right if what you want is to learn and you are using a test system, but that does not seem to be your goal.

No, the 19/11/2022 backup was a regular backup: I restored ALL its files from the snapshot.

I don’t understand what you mean :-/ of course I restored ALL the backup file set and not only some dblock\dlist\dindex files.

Assuming steps 1 and 2 got a matched job database and destination files, Duplicati should restore OK.
I’m not clear on the wording of step 3. If you mean you put back the new database and destination, ditto; however, your accident apparently deleted all but the latest version, so how did you know the backup’s history?

Actually the database has clues, but it’s good to know that you have only a two-month gap in restoring. Maintaining continuity with the old backup could in theory mean restoring the latest version of the source files, going back to the two-month-old database and destination, then doing another backup to bring it up to date after the gap.

This is all in theory and I’m not totally clear on what you did, so if you test, try not to make things worse.

Safer would be to save the VSS-recovered files as history. When the source looks as desired, run a fresh backup. Unfortunately, the drive might not have enough room for both the old and new destination files. Your choice; however, as long as you have that VSS safety net, you can probably revert if problems with the plan appear.

Test your restore, and if there have ever been other signs that Duplicati has trouble, that’s an alarm too.
Good practices for well-maintained backups has more, but the care taken depends on the value of that backup.

If you mean the GUI server database Duplicati-server.sqlite, that’s where the options for the backup are.
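So a cheap safeguard is to copy that file while Duplicati is stopped. Assuming a per-user install on Windows, it sits in the Duplicati data folder; the destination folder below is just an example:

```
REM Stop the Duplicati tray icon/server first so the file is not in use.
copy "%LOCALAPPDATA%\Duplicati\Duplicati-server.sqlite" "G:\config_backup\"
```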

Now I have the damaged backup set in G:\backup_damaged and the “shadowed” backup in G:\backup_shadowed. At step 3 I rename the folders as follows:
G:\backup_damaged → G:\backup_damaged_bad
G:\backup_shadowed → G:\backup_damaged

and ditto for the local db… I could skip this step and launch a repair command, but I remember that recreating the local db is a tedious and very long operation.
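In cmd terms, the swap is roughly this (the job database name is a placeholder; every job gets its own randomly named .sqlite file):

```
REM Swap the destination folders.
ren G:\backup_damaged  backup_damaged_bad
ren G:\backup_shadowed backup_damaged

REM Ditto for the job database: set the stale one aside, put the restored one in place.
ren "%LOCALAPPDATA%\Duplicati\XXXXXXXXXX.sqlite" XXXXXXXXXX.sqlite.bad
copy "G:\restored_db\XXXXXXXXXX.sqlite" "%LOCALAPPDATA%\Duplicati\"
```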

Sorry, I can’t follow these words.

Can’t follow these either, but don’t ever launch a repair command with a mismatched destination and DB, such as a stale database. It can make things “match” again by deleting destination files to fit the records.

It can be pretty fast, or not. If you have one that gets past 70% on the progress bar, that’s a sign of damage; however, I don’t know whether yours is the ordinary far-from-instant case or something severe like an all-dblock search.

EDIT:

OK, I think I follow better if I remove some extra words, except I guess I got the direction wrong, depending on which area you’re actually running Duplicati in. I thought you were switching to the damaged one (new, but missing versions), trying to get to 19/02/2023. What’s 19/02/2022 from?
There’s a 19/11/2022 mention. It sounds like you’re still preparing to restore the old Duplicati source

EDIT 2:

although why would you do that (besides as a test, which is valuable). What’s the end goal here?

EDIT 3:

If you think (worth testing) that you’ve got a perfectly healthy old backup thanks to the VSS snapshot,
which also got back the old database, and if the Duplicati source files are OK (are they?), just do a backup.

Ouch! I mean 19/11/2022.

Anyway, I tried a “direct restore from backup files”: the recreation of the temporary db took about 1 or 2 minutes and I can see versions until 19/2/2022. At this point I don’t know what the best choice for me would be:

  1. Continue with the current backup, store the “shadowed” backup, and use it only if I need a version from before 19/11/2022

  2. Delete the current backup, recreate the db from the dataset, and use this backup with its two-month gap in versions.
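(For reference, that direct restore can also be scripted; with --no-local-db the restore builds its own temporary database instead of using a --dbpath. The restore target below is a placeholder:)

```
REM Restore everything using only the destination files; no job database needed.
Duplicati.CommandLine.exe restore "file://G:\backup_shadowed" "*" ^
  --restore-path="G:\restore_test" --no-local-db=true
```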

In recent history, it sounds like you had an accident losing old versions, and made a backup.
This would be a good time to do a full fresh start, e.g. if source files are over 100 GB or there
is any hint of damage beyond version loss. If you do option 1, might as well start really clean.

I’m assuming that current means the damaged one, and shadowed means one from shadow.

Option 2 puzzles me. It sounded like VSS got you an old database, or was it just the destination?
If no old database exists, this would be a good time to make sure you can recreate one from the VSS-recovered destination.
The progress bar gives a clue (70% to 100% is bad). About → Show log → Live → Verbose
gives the details. If it gets into the 90% to 100% range, it’s doing a slow search through dblocks.

A direct restore can be a little less of a database rebuild because it’s a partial based on needs.

Correct.

When a backup completes without error, I launch another backup job which backs up the Duplicati local db folder… this backup job has a retention of 5-6 versions. It is saved on the same volume where the shadowed backup is stored. So when I opened the 19/11/2022 snapshot I saved a copy of this dataset too… and I was able to restore the local db as of 19/11/2022.

As a result, I now have a local db and its related dataset aligned to 19/11/2022.
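As a command-line sketch of that second job (assuming a per-user install; the paths and retention count are illustrative):

```
REM Back up the Duplicati data folder itself, keeping around 5 versions.
Duplicati.CommandLine.exe backup "file://G:\db_backup" "%LOCALAPPDATA%\Duplicati" ^
  --keep-versions=5
```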

This is dangerous if you ever use one that’s stale and no longer matches the destination.
If you try to use it, it won’t match. If you try to repair, it might destroy your newer versions.

duplicati-2.0.6.3-2.0.6.3_beta_20210617 just destoryed one month worth of backup #4579

When I clicked “repair database”, duplicati marked all remote backups that it couldn’t see in the LOCAL index for DELETION instead of recognizing that the REMOTE files were MORE RECENT and that the LOCAL index was OUTDATED.

There’s currently no code to stop you, but that’s the warning not to do that.

Aligned is OK. You could still move the database aside and see if recreate works well.
If it’s smooth per the previous directions, you can delete the test DB and put the original back.
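A sketch of that test (placeholder paths; repair recreates the database when the file named by --dbpath is absent):

```
REM Move the database aside, then let repair rebuild it from the destination files.
ren "%LOCALAPPDATA%\Duplicati\XXXXXXXXXX.sqlite" XXXXXXXXXX.sqlite.keep
Duplicati.CommandLine.exe repair "file://G:\backup_damaged" ^
  --dbpath="%LOCALAPPDATA%\Duplicati\XXXXXXXXXX.sqlite"
```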


I tried a db restore… and got an error about missing dblock files :expressionless:

So the best approach is to store this dataset and use it ONLY in case of emergency, and to start with a fresh new backup set.

In order to do this, can I simply delete the files in the destination path and the local db? Or should I perform other operations?

Can you give the actual error? Sometimes it’s not fatal. Did it report any later status, or was that it?

EDIT:

You could also look over its log report.

Loss of 1 dindex file and 2 dblock files. I ran the affected command; some files are involved, and some of them are photos of me and my schoolmates… these are important memories for me and I prefer not to risk losing them.
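For anyone landing here later, those checks look roughly like this (remote filenames are placeholders for the ones named in the error message). Note that purge-broken-files, the usual follow-up, permanently drops the affected entries from the backup, which is exactly the risk to avoid with irreplaceable files:

```
REM Which source files do the missing volumes affect?
Duplicati.CommandLine.exe affected "file://G:\backup_damaged" ^
  duplicati-b11111...dblock.zip duplicati-b22222...dblock.zip

REM Broader view of what is broken (read-only; purging is a separate command).
Duplicati.CommandLine.exe list-broken-files "file://G:\backup_damaged"
```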