Any way to tell if duplicati-cli is actually doing anything?

Or an extra backup job in Duplicati. I was too lazy to check for another solution…

I’ve still never been able to finish a duplicati-cli repair session, and I can’t tell if it’s ever going to finish. I’m now on the latest canary build (2.0.3.4_canary_2018-04-02). This is on a Linux box; shouldn’t I see a growing .sqlite3 file in ~/.config/Duplicati for the user attempting the repair? But I don’t even see a file in that folder that has been updated in the last few hours, and lsof doesn’t show any relevant .sqlite3 file open. Is it possible that duplicati-cli is just hung and doing nothing? How would I know? I am running with --verbose, but it doesn’t report anything after asking for (and receiving) the encryption password.
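In case it matters, these are the kinds of checks I’ve been running to look for activity (just standard tools; I’m assuming the CLI runs under mono and keeps its databases under ~/.config/Duplicati):

# Newest-first listing; the repair database’s size/timestamp should move if work is happening
ls -lt ~/.config/Duplicati/

# The CLI runs under mono on Linux, so look for open .sqlite handles on that process
lsof -c mono 2>/dev/null | grep -i sqlite

# Re-run periodically; the folder should grow during a database recreate
du -sh ~/.config/Duplicati/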

Most of my repair tests have been pretty quick (under an hour), but a reBUILD on the same database took days - and I also noticed the file would go many hours with no size or timestamp change.

My guess is there’s a lot of activity early on as block hashes are being written to the database, but the further back into your versions you go, the more hashes you encounter that already exist (thanks to deduplication), so no update is necessary.

That could be – I’m seeing high CPU (now a day later). Should I be looking for progress in ~/.config/Duplicati for the user that executed duplicati-cli, or in /root/.config/Duplicati where the Duplicati service is running as root?

I think you’ll be wanting to check in /root/.config/Duplicati since the CLI mostly just talks to the server (which in your case is running as a root service).

But I doubt you’ll see much in there. If I recall correctly, there’s a known design decision around command-line use: if you didn’t specify a dbpath parameter, there’s no job-level logging.

You might still be able to see some activity info in the web GUI either under main menu About -> Show log -> Live -> Profiling or main menu About -> System info -> System state properties (scroll down and find the lastPgEvent line).

If this is something you’ll want to do across multiple runs, consider using the --log-file, --log-file-log-level, and --log-file-log-filter parameters to have status info written to a log file you can monitor.
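Something along these lines (just a sketch: the destination URL and log path are placeholders, and the log-file options assume a recent canary build):

duplicati-cli repair file:///path/to/backup/destination \
    --log-file=/tmp/duplicati-repair.log \
    --log-file-log-level=Verbose

# then follow along from another terminal:
tail -f /tmp/duplicati-repair.log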

Thanks. Would I expect to see two .sqlite files or just one in .config/Duplicati, given that this box has its own Duplicati backup job and is also the ssh target for another system? In other words, there are two different systems being backed up onto this box: the local box and a remote box. When I reconstruct the remote box’s database, does it go into the same database as the local box’s? If so, can I look in the .sqlite file and see the two different backups?

The .sqlite file for a backup lives on the machine where Duplicati is running. So if remote machine A is using local machine B as its destination, the database file will be on remote machine A.

Duplicati is designed to run in a single place (but still be able to save to lots of different destination types) so there is no central location from which to check on multiple machines all running Duplicati.

However, if that’s something you’re interested in then there are a few external projects people have put together to centralize Duplicati reporting (and maybe even control).

I understand that’s the ordinary scenario, but this whole thread started with an attempt to simulate a disaster recovery. Specifically:

Box A backs up to Box B
Box B also backs up (locally) to Box B

In my disaster scenario, Box A has a catastrophic failure, so the original DB is gone. I thought it would be possible to re-create the original DB from the metadata stored on Box B. That’s what I’m trying to do with my duplicati-cli repair command on Box B – I’ve pointed it at the target directory where Box A was previously backing up (via ssh).

Is this not going to work?

Whoops. That’s what I get for not re-catching up on all 4 months of the topic. :blush:

Normally I’d say “sure, just go to the job in the GUI and recreate the database” but in your example the job itself is also gone, right? And just for fun you’re wanting to do this via the CLI, right?

My apologies if you’ve already posted about this (and if I’m still misunderstanding your goal), but have you tried the repair command?

Usage: repair <storage-URL> [<options>]
Tries to repair the backup. If no local db is found or the db is empty, the db is re-created with data from the storage. If the db is in place but the remote storage is corrupt, the remote storage gets repaired with local data (if available).

I tried renaming a .sqlite file on a test job then running the repair CLI and a new .sqlite file was created.

C:\Program Files\Duplicati 2>Duplicati.CommandLine.exe repair C:\_Backups\Duplicati\TestRestore

Enter encryption passphrase: *
  Listing remote folder ...
Rebuild database started, downloading 27 filelists
  Downloading file (69.71 KB) ...
  Downloading file (121.88 KB) ...
  Downloading file (121.89 KB) ...
  Downloading file (268.19 KB) ...
  Downloading file (268.41 KB) ...
Failed to process file: duplicati-20180308T213015Z.dlist.zip => Unknown header: 2656051151
  Downloading file (268.62 KB) ...
  Downloading file (268.62 KB) ...
  Downloading file (268.62 KB) ...
  Downloading file (268.61 KB) ...
  Downloading file (268.62 KB) ...
  Downloading file (268.62 KB) ...
  Downloading file (268.62 KB) ...
  Downloading file (268.61 KB) ...
  Downloading file (268.61 KB) ...
  Downloading file (268.62 KB) ...
  Downloading file (268.61 KB) ...
  Downloading file (268.62 KB) ...
  Downloading file (273.35 KB) ...
  Downloading file (273.34 KB) ...
  Downloading file (273.33 KB) ...
  Downloading file (273.33 KB) ...
  Downloading file (273.35 KB) ...
  Downloading file (273.34 KB) ...
  Downloading file (345.63 KB) ...
  Downloading file (86.50 KB) ...
  Downloading file (86.60 KB) ...
  Downloading file (104.00 KB) ...
Filelists restored, downloading 7 index files
  Downloading file (51.05 KB) ...
  Downloading file (8.22 KB) ...
  Downloading file (73.40 KB) ...
  Downloading file (113.41 KB) ...
  Downloading file (225.42 KB) ...
Failed to process index file: duplicati-i6735436af81a45b5bbd68ef999a63514.dindex.zip => Unknown header: 1038501081
  Downloading file (99.68 KB) ...
  Downloading file (45.52 KB) ...
Processing required 1 blocklist volumes
  Downloading file (34.19 MB) ...
Probing 1 candidate blocklist volumes
  Downloading file (41.12 MB) ...
Recreate completed, verifying the database consistency
Recreate completed, and consistency checks completed, marking database as complete
Update "2.0.3.3_beta_2018-04-02" detected

My backups are stored as follows:

/media/veracrypt/folder1 - the local backup
/media/veracrypt/folder2 - the target for the remote ssh backup

Since there is no database on this system corresponding to /media/veracrypt/folder2, I ran

duplicati-cli --verbose repair file:///media/veracrypt/folder2

It prompted me for the encryption passphrase (which I entered), and since then I’ve seen no activity. I tried this earlier for a few weeks and it never finished. After upgrading to recent versions of Duplicati on all systems, I tried again several days ago, and I can’t tell if it’s making progress. So I’m trying to see if I could look at an sqlite file to determine whether it is.
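For the record, this is the sort of thing I was hoping to do to gauge progress (the database file name is randomly generated, and the table name below is only an example, so treat this as a sketch):

# List whatever database files exist for the user running the CLI
ls -lt ~/.config/Duplicati/*.sqlite

# Show which tables exist so far, then count rows in one of them;
# a growing count on later re-runs would suggest the recreate is progressing
sqlite3 ~/.config/Duplicati/XXXXXXXXXX.sqlite '.tables'
sqlite3 ~/.config/Duplicati/XXXXXXXXXX.sqlite 'SELECT count(*) FROM FilesetEntry;'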

Just in case it might be an OS issue (works on Windows but not Linux) I fired up an old VM (Ubuntu based LinuxLite 3.8 running Duplicati 2.0.2.1 beta daemon as root) and repeated my test.

Even though it all looks like it worked, it did NOT end up making a new .sqlite file like it did on Windows. But it did show progress information, so it’s still not the same scenario you are seeing.

duplicati-cli --verbose repair /home/linuxlite/Duplicati/CLI\ Test
Input command: repair
Input arguments: 
	/home/linuxlite/Duplicati/CLI Test

Input options: 
verbose: 

Invalid type Microsoft.WindowsAzure.Storage.Blob.BlobEncryptionPolicy for instance field Microsoft.WindowsAzure.Storage.Blob.BlobRequestOptions:<EncryptionPolicy>k__BackingField
Invalid type Microsoft.WindowsAzure.Storage.Queue.QueueEncryptionPolicy for instance field Microsoft.WindowsAzure.Storage.Queue.QueueRequestOptions:<EncryptionPolicy>k__BackingField
Invalid type Microsoft.WindowsAzure.Storage.Table.TableEncryptionPolicy for instance field Microsoft.WindowsAzure.Storage.Table.TableRequestOptions:<EncryptionPolicy>k__BackingField

Enter encryption passphrase: 
  Listing remote folder ...
Rebuild database started, downloading 1 filelists
  Downloading file (845 bytes) ...
Filelists restored, downloading 1 index files
  Downloading file (925 bytes) ...
Recreate completed, verifying the database consistency
Recreate completed, and consistency checks completed, marking database as complete
Update "2.0.3.3_beta_2018-04-02" detected

OK, that’s helpful. I remotely mounted this drive from a Windows box, and now duplicati-cli actually shows it’s doing something! So the Linux duplicati-cli seems incapable of recovering/repairing/restoring from a Windows backup. Is this a known limitation? It seems the CLI should at least complain if it’s not going to work.

I should also note that the Windows repair session from the CLI is giving output, but it is showing repeated errors. Does this mean it’s actually making progress, or is it in some kind of infinite loop? The path/filename removed in the excerpt below is always the same:

Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
  Downloading file (58.01 MB) ...
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
  Downloading file (57.91 MB) ...
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
  Downloading file (57.93 MB) ...
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
  Downloading file (58.00 MB) ...
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
  Downloading file (58.07 MB) ...
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
Failed to process file-entry: c:{PATH REMOVED}{FILENAME REMOVED}.pdf => constraint failed
UNIQUE constraint failed: FilesetEntry.FilesetID, FilesetEntry.FileID
  Downloading file (58.13 MB) ...

Ah - somehow I missed that this was a cross-OS situation. I know there has been another issue reported by a user who was trying to do something between OSes, but I think it was continuing on Linux a backup that was started on Windows (it definitely wasn’t a database repair / recreate).

I’m going to guess that the issue is related to the path separator differing between the two environments: the repair flakes out because it’s running in an environment with one path separator, while none of the paths in the database use that separator.

Perhaps @kenkendk or @Pectojin can chime in on whether there’s a parameter to “force” a particular separator, or whether an update should be made so the separator is taken from the target database rather than the runtime environment (assuming my guess is correct, that is).

My guess is that it’s actually working, but not handling the expected “oops, the database already has that record” error gracefully (the correct handling would be to ignore it, since it’s a potentially expected result).
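To illustrate what “ignore it” could look like at the SQLite level (a generic SQLite idiom, not a claim about Duplicati’s actual code), INSERT OR IGNORE turns a duplicate insert into a no-op instead of a UNIQUE constraint error:

sqlite3 /tmp/demo.sqlite <<'SQL'
CREATE TABLE IF NOT EXISTS FilesetEntry (FilesetID INTEGER, FileID INTEGER, UNIQUE(FilesetID, FileID));
INSERT OR IGNORE INTO FilesetEntry (FilesetID, FileID) VALUES (1, 42);
-- inserting the same pair again is silently skipped instead of failing
INSERT OR IGNORE INTO FilesetEntry (FilesetID, FileID) VALUES (1, 42);
SELECT count(*) FROM FilesetEntry;  -- prints 1
SQL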