When the local database goes missing: How Duplicati rebuilds, verifies, and recovers your backups
Duplicati is built around a simple promise: your data should remain private and secure wherever you store it. That security-first approach shapes almost every internal design choice - including how backups are stored, indexed, and recovered when something goes wrong.
This post explains a less visible but critical part of the system: the local database, why it matters, what happens if you lose it, and how Duplicati can still recover your data even in worst-case disaster scenarios.
Encrypted volumes: great for security, hard for analysis
Duplicati stores your backup data as encrypted (zip) volumes on the destination. This is excellent for confidentiality: cloud providers, storage admins, or anyone who gets access to the destination can’t read your files without your passphrase.
Because the data is packed into similarly sized volumes, it is hard even to reason about what kind of data is stored: details such as file count and size distribution are hidden.
But encryption comes with a tradeoff: once data is encrypted, you can’t easily inspect or analyze what’s inside without downloading and decrypting the volumes.
That means Duplicati itself can’t simply “look at the storage” and know which files are present or where they live inside the backup sets.
The local database: Duplicati’s map of remote data
To avoid constant downloads, Duplicati maintains a local lookup/index of what exists remotely. This index is stored in an SQLite database and is commonly referred to as the local database.
Think of it as Duplicati’s map - linking:
- file paths
- versions and timestamps
- block lists
- volume references
- metadata needed for restores and incremental backups
Without this map, Duplicati would need to download backup volumes to answer basic questions like “what changed?” or “do I have this block?”
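To make the idea concrete, here is a purely illustrative sketch of the kind of lookup the database provides. The real thing is an SQLite database with considerably more detail; the structures and names below are invented for the example.

```python
# Purely illustrative: a toy model of the "map" the local database provides.
# The real schema has more tables and detail; all names here are invented.

# Which blocks make up a given version of a given file
file_versions = {
    ("/home/alice/report.docx", "2024-05-01"): ["b1", "b2", "b3"],
    ("/home/alice/report.docx", "2024-06-01"): ["b1", "b4"],
}

# Which remote volume holds each block
block_to_volume = {
    "b1": "duplicati-b001.dblock.zip.aes",
    "b2": "duplicati-b001.dblock.zip.aes",
    "b3": "duplicati-b002.dblock.zip.aes",
    "b4": "duplicati-b003.dblock.zip.aes",
}

def volumes_needed(path, version):
    """Answer: which remote volumes must be downloaded to restore this file?"""
    return {block_to_volume[b] for b in file_versions[(path, version)]}

# Answering "do I have this block?" or "what changed?" becomes a local lookup,
# with no need to touch the (encrypted) remote storage at all.
print(volumes_needed("/home/alice/report.docx", "2024-06-01"))
```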
Losing the local database isn’t the end
If you lose your local database (disk failure, accidental deletion, migration mishap, etc.), Duplicati needs to rebuild that map to continue operating normally.
Rebuilding could be expensive… if Duplicati had to download everything.
So instead, Duplicati stores a set of index files on the destination. These files contain exactly the information needed to recreate the database: nothing more, nothing less. They serve no purpose other than rebuilding the database.
With valid index files, a full database rebuild requires only those small index volumes, not the entire backup.
What if index files are faulty?
If index files are missing or corrupted, Duplicati doesn’t give up. It will fall back to scanning the actual backup data volumes to extract the information it needs.
That works, but it is slower, because Duplicati must download the larger data volumes that contain the actual backup contents.
So while index files are optional for normal operation, they are essential for recreating the local database quickly.
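Conceptually, the recreate logic can be sketched as follows. The data model and helper functions here are invented for illustration (the real implementation lives in Duplicati's codebase); the point is the index-first, data-volume-fallback shape.

```python
# Conceptual sketch only; volumes are modeled as plain dicts, and the
# "read" helpers stand in for download + decrypt + parse steps.

class CorruptVolumeError(Exception):
    pass

def read_index_volume(vol):
    """Stand-in for reading a small index (dindex) volume: yields
    (block_hash, dblock_name) entries describing a data volume's contents."""
    if vol.get("corrupt"):
        raise CorruptVolumeError(vol["name"])
    return vol["entries"]

def read_data_volume(vol):
    """Stand-in for reading a much larger data (dblock) volume: yields the
    block hashes actually stored inside it."""
    return vol["blocks"]

def recreate_database(index_volumes, data_volumes, db):
    # Fast path: index volumes are small and describe the data volumes.
    covered = set()
    for vol in index_volumes:
        try:
            for block_hash, dblock_name in read_index_volume(vol):
                db[block_hash] = dblock_name
                covered.add(dblock_name)
        except CorruptVolumeError:
            continue  # unusable index file; its data volume is scanned below

    # Slow path: scan any data volume that no valid index file covered.
    for vol in data_volumes:
        if vol["name"] not in covered:
            for block_hash in read_data_volume(vol):
                db[block_hash] = vol["name"]
    return db
```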
Self-healing in daily use
One of the nice properties of Duplicati is that it doesn’t require the local database to be perfect at all times (but it does need to be perfect for backups to continue).
For some operations, if part of the database is missing, Duplicati automatically recreates just what it needs. If you are running a restore, Duplicati will recreate a fragment of the database containing only what is needed to perform that specific restore.
If you want to force a full rebuild yourself, you can do that in the UI under the Database menu.
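On the command line, the equivalent is the repair command, which recreates the database from the remote data when the file given by --dbpath does not exist. The exact executable name depends on your version and platform, so treat this as an approximate example and check the built-in help for details:

```
duplicati-cli repair <storage-url> --dbpath=<path-to-local-db> --passphrase=<passphrase>
```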
Worst case: local database lost and remote storage damaged
The worst-case scenario looks like this:
- Your local database is lost.
- Your remote storage is damaged enough that index files and/or critical data volumes are missing or corrupted.
In that case, it may be impossible to recreate the local database, because the information simply no longer exists remotely.
To reduce the chances of this happening unnoticed, Duplicati includes a safety check: after each backup, it tests remote storage integrity by selecting a few random files and verifying that they are intact. The goal is early detection of issues that could prevent a restore later when it is really needed.
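The amount of data sampled can be tuned per backup, and a verification pass can also be run on demand with the test command. The option and command names below reflect recent 2.x releases; consult the built-in help for your version:

```
# Verify more sample sets than the default after each backup
duplicati-cli backup <storage-url> <source-folder> --backup-test-samples=5 --passphrase=<passphrase>

# Explicitly verify 10 random sample sets right now
duplicati-cli test <storage-url> 10 --dbpath=<path-to-local-db> --passphrase=<passphrase>
```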
When normal restore won’t work: RecoveryTool
If you are in the worst-case scenario (no local database and a damaged destination), regular Duplicati operations can’t function normally.
That’s when you use the recovery tool.
There are two forms:
- a single-file Python script
- the Duplicati.CommandLine.RecoveryTool / duplicati-recovery-tool programs included in the installation
The recovery tool does not use a local database and does not read any index files. For that reason it is not as efficient as a normal restore; instead, it attempts a best-effort restore, even if that process leaves some files only partially restored.
How recovery works internally
Instead of creating structured indexes, the recovery tool:
- reads whatever destination data is available
- builds a text-based index of discovered content
- uses that index as a lookup for restore decisions
So even if your destination is fragmented - say, only part of the volumes are left - the recovery tool can still salvage the usable pieces.
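In practice this maps to a short sequence of commands, shown here with the installed tool name mentioned above. Exact arguments vary between versions, so check the tool's help output before relying on it:

```
# 1. Download (and decrypt) whatever volumes are still readable
duplicati-recovery-tool download <storage-url> /tmp/recovered --passphrase=<passphrase>

# 2. Build the text-based index over the downloaded content
duplicati-recovery-tool index /tmp/recovered

# 3. Restore as much as possible from the indexed fragments
duplicati-recovery-tool restore /tmp/recovered --targetpath=/tmp/restored
```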
In short
Duplicati’s security model makes remote backups opaque by design, so it relies on a local SQLite database to understand what’s stored remotely. If that database is lost, Duplicati rebuilds it using remote index files, or by scanning real backup data if needed.
And even in extreme disaster cases, where both the local database is gone and the remote storage is damaged, Duplicati still offers a final safety net: the recovery tool, which can restore whatever fragments remain.
So the system has layers:
- Encrypted storage for confidentiality
- Local database for speed
- Remote index files for efficient rebuilds
- Integrity testing for early corruption detection
- Recovery tool for best-effort salvage
Backups are only useful if you can restore them. Duplicati is designed to keep that true - even when multiple things go wrong at once.