Advice on validating/recovering/recreating database

Summary:
I have a database, but I want to ensure it is valid, or create a new database that is valid.

Background:
Duplicati is backing up to B2.

Mid-February I moved /home from Linux laptop “A” to Linux laptop “B”, and also moved the Duplicati configuration and local database. Backups were stopped on laptop “A”. Laptop “B” was backing up without any known significant issues. On March 23 laptop “B” developed an issue related to a BIOS update and would no longer boot.

I still had a copy of /home from laptop “A” from mid-February, and was able to start with that.

I attempted a delete-and-recreate of the database, which failed after 1.5-2 days. I believe - but am not sure - that the root cause of the failure was insufficient space in /tmp.

After some other restore/recreate attempts that went nowhere, and some searching of this forum, I was able to get the Duplicati server running using /var/tmp, and to restore /home using the “direct restore from a saved config” option. Based on forum articles and a small test restore, I was expecting this to create a temporary database for the restore, but it seems to have created a “permanent” database.

Current situation:
So now I have a local database that I was not expecting! I have not attempted any new backups at this point.

What am I asking?
Is there a way to validate the current local database that was created by the “direct restore from saved config”? If not, what would be the “best practice” for recreating the database?

Since your result disagrees with expectations, please clarify the result; “it seems” is too vague.

Expected = a temporary database used for the restore and then automatically deleted. Expectation is based on everything referring to the database as “temporary”.

Result = a 2.7 GB database that is present on disk post-restore. I can issue a “list” command that shows me the filesets in the database. Given how quickly the command completes, I assume it is getting the data from the local database rather than from B2.

I said “it seems” as I am not sure if this is a full, proper database, or a “temporary” database that did not get deleted for some reason.

Present where? What name? Noticed how?

From the GUI’s Commandline? That should use the DB shown on the GUI Database screen.

I “think” the “partial temporary” database would have only one version in it.
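If you want to test that guess directly, here is a hedged sketch. It assumes the job’s sqlite file has a `Fileset` table with one row per backup version — that table name is my assumption about the schema, not something stated in this thread:

```python
# Hedged sketch: count backup versions in a Duplicati job database.
# Assumes the sqlite schema has a "Fileset" table with one row per
# version; a "partial temporary" DB should then show a count of 1.
import sqlite3

def count_filesets(db_path):
    con = sqlite3.connect(db_path)
    try:
        (n,) = con.execute("SELECT COUNT(*) FROM Fileset").fetchone()
    finally:
        con.close()
    return n
```

If the DB is a full one for this backup, the count should match the number of versions the GUI reports; a leftover “partial temporary” DB would show 1.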

You can also look at the output. The list command works with or without a DB:

Listing filesets:
0	: 4/1/2025 8:06:22 AM (1 files, 130 bytes)
Return code: 0

or if no DB:

  Listing remote folder ...
  Downloading file duplicati-20250401T120622Z.dlist.zip (665 bytes) ...
Listing filesets:
0	: 4/1/2025 8:06:22 AM
Return code: 0
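The two outputs also differ mechanically: with a local DB, each fileset line carries a file count and size; without one, the line stops at the timestamp. A hedged sketch that tells them apart, with the line format assumed from the two samples above:

```python
# Hedged sketch: detect whether "list" output came from a local DB.
# With a DB, fileset lines end in "(N files, SIZE)"; without one,
# they end at the timestamp. Format is assumed from the samples above.
import re

def listing_used_local_db(output):
    for line in output.splitlines():
        if re.match(r"\s*\d+\s*:", line):  # a fileset line like "0 : 4/1/2025 ..."
            return line.rstrip().endswith(")")
    return False
```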

Rather than quote individual questions, here are copy/pastes from the GUI:

From the main screen of the GUI (backup name not copied):

Last successful backup:
    Mar 22, 2025 2:08 AM (took 00:08:52)
    Run now 
Source:
    334.371 GiB
Backup:
    317.203 GiB / 18 Versions

Result from issuing the “list” command from the GUI:

Running commandline entry
Finished!

Listing filesets:
0	: 3/22/2025 2:00:00 AM (469846 files, 334.371 GiB)
1	: 3/20/2025 2:14:52 PM (469700 files, 334.277 GiB)
2	: 3/18/2025 8:40:53 AM (469547 files, 334.238 GiB)
3	: 3/17/2025 2:00:00 AM (469358 files, 334.203 GiB)
4	: 3/10/2025 2:00:00 AM (465265 files, 333.810 GiB)
5	: 3/3/2025 1:00:00 AM (464824 files, 333.763 GiB)
6	: 2/24/2025 1:00:00 AM (459038 files, 332.944 GiB)
7	: 2/17/2025 1:00:00 AM (458411 files, 332.818 GiB)
8	: 1/11/2025 1:00:00 AM (657230 files, 352.805 GiB)
9	: 12/7/2024 1:00:00 AM (683101 files, 345.391 GiB)
10	: 11/1/2024 2:00:00 AM (657444 files, 340.776 GiB)
11	: 9/27/2024 2:00:00 AM (661267 files, 342.722 GiB)
12	: 8/23/2024 2:00:00 AM (665656 files, 342.049 GiB)
13	: 7/19/2024 2:00:00 AM (662543 files, 341.347 GiB)
14	: 6/13/2024 6:55:28 AM (659217 files, 341.106 GiB)
15	: 5/6/2024 6:55:27 PM (659863 files, 362.140 GiB)
16	: 3/31/2024 2:00:01 AM (655284 files, 390.574 GiB)
Return code: 0

[If you need me to run it from the Linux command line let me know]

Database info from the GUI:

Location
Local database path: /root/.config/Duplicati/UMYFGVUULA.sqlite

Database file from Linux command line:

# ls -l /root/.config/Duplicati/UMYFGVUULA.sqlite
-rw-------. 1 root root 2879684608 Apr  1 09:48 /root/.config/Duplicati/UMYFGVUULA.sqlite

[I assume the file has today’s date as a side-effect of the “list” command?]
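That guess is easy to verify: compare the file’s modification time before and after running the command. A minimal sketch, plain stdlib and nothing Duplicati-specific:

```python
# Minimal sketch: did running some action rewrite a file?
# Compares the file's modification time before and after.
import os

def action_touched_file(path, action):
    before = os.path.getmtime(path)
    action()  # e.g. issue the "list" command here
    return os.path.getmtime(path) != before
```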

Please let me know if you need more information.

This doesn’t prove current state since it’s only updated when a backup finishes.

This looks like the job has a DB, as seen on the DB screen. Assuming it’s the one cited later.

OK, now try to read the background from the original post, knowing more context.

In order to fail a DB Recreate on a space issue, you would have added the job first, which would have assigned a database path but not immediately created the database (without a database, the Delete button would be disabled). The failed out-of-space attempt probably left an unhealthy database in the assigned spot. You can read its job logs, looking at operation types and dates, to see whether they line up with that failed Recreate.

While I’m not familiar with the direct restore type you did, a regular one makes only one version, IIRC. It also cuts back on other work, to make a “partial temporary” DB.

If you want to get a little riskier than Job → Show log, there are commands that refuse to run if they see a job DB that didn’t finish a recreate — which is seemingly where things ended (out of space). The command least likely to change anything is probably list-broken-files. If you like, you can install sqlitebrowser for a manual exam, looking for a repair-in-progress marker in (probably) the Option table. I’d check the logs first.
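For that manual exam, here is a hedged sketch of the same check without sqlitebrowser. The “Option” table name comes from the paragraph above; the exact marker name is an assumption, so the query matches loosely:

```python
# Hedged sketch: look for a leftover repair/recreate marker in a
# Duplicati job DB. The "Option" table name is per the post above;
# the exact marker name is an assumption, so match loosely.
# (SQLite's LIKE is case-insensitive for ASCII by default.)
import sqlite3

def find_repair_markers(db_path):
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT * FROM Option WHERE Name LIKE '%repair%'"
        ).fetchall()
    finally:
        con.close()
```

An empty result would suggest no unfinished recreate was recorded there.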

The original database recreate - the one I assume failed from running out of temp space - failed hard: no database was created.

list-broken-files is running, and already showing several thousand matches. Here is an excerpt:

16	: 3/31/2024 2:00:01 AM	(98304 match(es))
[...]
	 ... and 98299 more, (use --full-result to list all)
15	: 5/6/2024 6:55:27 PM	(47217 match(es))
[...]
	 ... and 47212 more, (use --full-result to list all)
14	: 6/13/2024 6:55:28 AM	(6267 match(es))
[...]
	 ... and 6262 more, (use --full-result to list all)

This certainly makes me feel that this database is not fit for purpose and I should try another full recreate, but I will wait for expert opinion/advice.

There is definitely a greater expert, but the lead dev’s time is very tight.

I still think you should look at the job log since you have something.

What it is and how it got there are questions. Logs might give clues.

I assume that means “Restore from configuration”, i.e. from an Export that can also be used with Add backup → Import from a file. The Restore does have some bugs in it:

Discrepancy with advanced options in “Restore from configuration” #2605

which was filling my screen (and log file) with bogus complaints about options like:

2025-04-01 08:09:08 -04 - [Warning-Duplicati.Library.Main.Controller-UnsupportedOption]: The supplied option ----no-encryption is not supported and will be ignored

That’s from a log-file. The annoying yellow on-screen warnings were dismissed. There IS a database path in the export, but it’s stored differently, and for me in a small test (and for you in yours) there was no database recreate for the job’s assigned DB.

If the developer gets involved, maybe some explanation is possible. If not, you can probably just try Recreate again. Look at job logs in DB before then, if you ever do.

I’m hesitant to try Repair, as it can change Destination when trying to align with DB.

Yes - “Restore from configuration”. I also tried “Direct restore from backup files” as well at one point. But I am reasonably sure the “restore” that left me with a database was “Restore from configuration”.

Looking over the stored logs, there are errors from backups on the failed laptop where there were missing files (the known issue), and then several instances of:

code = Corrupt (11), message = System.Data.SQLite.SQLiteException (0x800007EF): database disk image is malformed

that pre-date the “Restore from configuration”. Finally there is a:

Failed while executing Repair "" (id: 991a9160-f16d-4285-b1d4-b36598eb4885)
Duplicati.Library.Interface.FileMissingException: The requested file does not exist

that I also believe pre-dates the “Restore from configuration”.

Thanks for your advice, and I apologize for my lack of specifics/details. It has been a rough week.

To do the recreate, I should do the following?

  1. Move current database and its backup file out of the /root/.config/Duplicati folder
  2. Create a shell with logging so we keep a full log of output
  3. export TEMPDIR=/var/tmp/duplicati/
  4. export TMPDIR=${TEMPDIR}
  5. duplicati-cli repair URL --no-local-blocks --tempdir ${TEMPDIR}

Any other suggested options?

Would there be any value in downloading the contents of the b2 bucket to a local directory and using it for the repair?

Was that the server log at About → Show log? I meant Job → Show log.

[screenshot of the Job → Show log menu]

That will give you the history of the job database you’re wondering about.

I was thinking of GUI Database screen, such as

[screenshot of the GUI Database screen]

If you move the questionable DB aside, Repair will recreate DB.
If you just want to trash it (and delete the evidence), Recreate it.

You can add Advanced options for log-file, log-file-log-level. Verbose?

--log-file (Path): Log internal information to a file
Log information to the file specified.

--log-file-log-level (Enumeration): Log file information level
Specify the amount of log information to write into the file specified by the option --log-file.
Values: ExplicitOnly, Profiling, Verbose, Retry, Information, DryRun, Warning, Error
Default value: Warning

(maybe use Verbose to try to catch more on one run? If not, maybe Information?)

--tempdir (Path): Temporary storage folder
Use this option to supply an alternative folder for temporary storage. By default the system default temporary folder is used. Note that SQLite will also put temporary files in this folder.

(and I hope it works as stated. In prior times, one had to set TMPDIR manually)

Has duplicati-cli been used before here? I thought this was GUI. CLI usage might hit database-location problems, and permission problems once the DB is located. The CLI’s database location is also completely independent of the GUI’s, so you have to tell it --dbpath.

--no-local-blocks is for restore. I don’t think it will hurt repair, but it won’t help.

You can probably get it going with duplicati-cli, and there’s a console-log-level option, but I’m not sure there’s a big advantage to doing it that way. Your call.

Maybe some speed. In terms of cost: if the repair has to download dblock files (unknown), then downloading once saves cost on a retry. Otherwise, the download might be wasted.

You can maybe go to the mystery job database’s Job → Show log → Remote, which should show the latest actions at top. I just did a database Recreate, and the log says:

so the dindex located everything the dlist wanted, thus no dblock download.

Yes - that was About → etc.

Under jobname → Show log there are 3 entries:

* Apr 1, 2025 10:52 AM - Operation: ListBrokenFiles failed
* Mar 29, 2025 11:04 AM - Operation: Restore
* Mar 29, 2025 10:33 AM - Operation: Restore

The entries for Mar 29 show green, the Apr 1 entry shows red. Both of the Mar 29 restores were for specific files, not the big restore of /home.

Up to this point I have been using the GUI. I was just thinking there might be an advantage to using the CLI instead. Since you think there is little to be gained, I will stick with the GUI.

I am running the server from the Linux shell as root with the following commands:

export TEMPDIR=/var/tmp/duplicati
export TMPDIR=${TEMPDIR}
/usr/bin/duplicati-server --webservice-port=8200 --webservice-interface=any --tmpdir ${TEMPDIR}

I’m using TEMPDIR, TMPDIR, and --tmpdir based on forum posts.

I will save the current database and proceed with a recreate from the GUI.

Thank you for the advice!

Log entries of note so far…

Apr 1, 2025 3:56 PM: Replaced blocks for 15 missing volumes; there are now 15 missing volumes 

Several entries like

Apr 1, 2025 4:00 PM: Found duplicate entry in archive: *data*

where *data* is various strings that look like Base64.

Currently on

Apr 1, 2025 8:22 PM: Pass 1 of 3, processing blocklist volume 65 of 170

EDIT 1 - now on “Pass 3 of 3, processing blocklist volume 793 of 5205”

Seeing several entries like:

Apr 2, 2025 8:57 AM: Unexpected changes caused by block duplicati-b27cd8a92a83a4de7b1a5064d4c447d33.dblock.zip.aes