Any way to tell if duplicati-cli is actually doing anything?

I’m backing up from a variety of platforms to a Linux box onto an encrypted drive. I’m able to restore files through the web interface from the source computer, but in a disaster situation I would presumably only have access to the backup drive and not the source. I was experimenting with restoring locally on the Linux box using the CLI pointed at the backup location, but whatever command I try to run prompts for the encryption password and then just sits there for what appears to be an indefinite amount of time. I can’t tell if it’s actually making progress since nothing is displayed, even with the --verbose option, although CPU usage is high, so perhaps it is processing. Does it take hours to get a result back? Is there any way to see progress?

For example, I tried this command:

duplicati-cli find file:///media/veracrypt1/carbon '*.txt'

That was an attempt to get a list of the text files in my backup. I was prompted for the encryption password, which I entered, and nothing more appears to have happened. I also tried a ‘compare’ command with the same result. Am I missing something?

As I understand it there isn’t a lot of detail available from the CLI commands.

You could try using the “Direct restore from backup files” option in the GUI, which supports restores to ANY machine, even if the original source hardware is gone.

If you didn’t want to install Duplicati somewhere else for the restore, you could use the “zip file download” button from the Duplicati download page, unzip it, and run it in “tray icon” mode to get to the direct restore option - no install required.
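
If you go the zip route on Linux, from the unzipped folder it would be something along the lines of the following (assuming mono is installed; the exact filename may differ between versions):

mono Duplicati.GUI.TrayIcon.exe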

My search skills are failing me at the moment, but there’s also a Linux script (on GitHub, I think) somebody has written that can do restores directly from backups as well. Let me know if you’re interested and I’ll try to find it.

Thanks, this is helpful. I’m able to use the “direct restore from backup files” option you suggest above and, unlike the CLI, there is at least a progress bar so you can tell that something is happening. It still takes quite a long time for each step, as it seems to recreate the database for each action: first when building the list of restorable files, and then again when actually restoring a file. It might take an hour just to restore one file. Perhaps this is inherent in the design of Duplicati, but couldn’t you keep a mirror copy of the database and indices on the target drive as well as the source, to enable easier restoration when the source drive is lost (not an uncommon situation when a restore is needed)?

I would definitely be interested in the Linux script as a “headless restore” could come in handy. I did a quick GitHub search based on your description and couldn’t find anything either.

I’m not sure what’s going on but I can’t seem to find any reference to a stand-alone restore script either. Hopefully @kenkendk will be able to clarify whether it’s real or I just dreamt it.

There is obviously nothing stopping you from creating another backup job to back up your local database(s), but I don’t know how much time it will save, because you would have to rebuild the database for that backup instead.

You could also upload the database directly and unencrypted (to the same or a different storage), but I don’t know what the security risk is with that.
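
For example, something along these lines as a separate job could work (the destination URL is just a placeholder; the source should be wherever your Duplicati databases live, e.g. ~/.config/Duplicati on Linux):

duplicati-cli backup file:///media/backupdrive/duplicati-dbs /root/.config/Duplicati

Add --no-encryption if you want it stored in the clear as described above, otherwise supply a passphrase.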

But you are suggesting that Duplicati should automatically take care of the db backup, right?

Right, my suggestion would be that every backup includes its own database, so if the user needs to restore without access to the source system (which is the classic use case for disaster recovery), it doesn’t take hours to rebuild the database.

If I did manually back up the local database, is there an easy way to load it on the remote system so that it understands those files are local to it? There may be some basic concept about the Duplicati design that I need to understand better.

If there is no local database, Duplicati will build a temporary database to be able to list the files, which is what is taking so long.

You can build the local database with:

duplicati-cli repair file:///media/veracrypt1/carbon

And after that finishes, it should be really fast at answering your query.

Not sure why it does not return anything (I assume you had .txt files in there).
What does this return:

duplicati-cli find file:///media/veracrypt1/carbon

And what about:

duplicati-cli find file:///media/veracrypt1/carbon "*"

Edit: I forgot to answer the topic title… You can add --verbose to the commandline to get all sorts of internal progress messages out.
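
For example, to watch the repair with progress output:

duplicati-cli repair file:///media/veracrypt1/carbon --verbose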

Thanks for the suggestions. I ran the repair command and, many hours later, I am still waiting for it to finish. The sqlite file is about 700M so far. Is it normal for it to take many hours to build a local database? The files are stored on a brand new drive, and the host machine is an Intel Quad Core CPU Q9300 @ 2.50GHz with 5GB RAM. The Duplicati backup directory that it is attempting to build a database for is about 155GB across 6296 files.

One possibility I should mention: the drive itself is volume-encrypted with VeraCrypt. Might there be something about how Duplicati does I/O that results in extremely slow processing when accessing a VeraCrypt volume?

Nearly 24 hours later, it’s still (apparently) processing. I didn’t use the --verbose flag; if I kill the task and restart with --verbose, will I be starting from scratch in rebuilding the database or will it pick up where it left off?

Sadly it will start over - there hasn’t been time yet to make the DB rebuild smart enough to continue after an interruption. :frowning:

Here we are about 40 hours later and it’s still processing. Is that plausible?

Potentially, yes. It all depends on bandwidth, system speed, backup size, etc.

Of course it’s possible something has stalled. Can you tell if the database file is getting any bigger, or if there are occasional bandwidth spikes as files are downloaded from the destination?

/root/.config/Duplicati/IDNNINISSZ.sqlite has been growing by about 100M per day (now up to 962M). Is that the file you are talking about? This is just for rebuilding the database from the CLI.

Yeah, that’s probably the one. If it’s growing then the rebuild is likely working (slowly).

Rebuild performance is a known issue, but we don’t yet have any improvements to roll out.

As an alternative, is there some way to merge the database from my source system into the database on the target system? The source is a Windows box and I see some sqlite files in AppData\Local\Duplicati, but I don’t want to overwrite the local database on the target Linux box, which has the record of the backups from that box as well.

Each job has its own .sqlite database file, so there isn’t really anything to “merge”.

If you wanted to copy or move a JOB to another machine, then yes - you can export the job, bring the export file and database over to the new machine, import the job, then use the Database menu to point the job at the database you brought over.

However, I don’t think that will work between Windows and Linux/macOS due to the difference in how paths are stored (drive letters, forward vs. backward slashes, etc.). Well, it would probably “work”, but the job would see the contents of the new OS as all new files, so there’d be a break in the history.

Sorry - I realize that doesn’t have much to do with the original question… :blush:

Thanks for the explanation. Presumably I could dump the sqlite file to text format and do some regexp work to make it usable on the target, but that isn’t scalable. My repair attempt is still going (the database is up to about 1.3GB now) after about 140 hours of processing. This would of course be a bad situation if I were actually trying to do disaster recovery, or even just trying to recover one inadvertently deleted or overwritten file. Is there something else I should be doing so that I wouldn’t have to wait a week or more to recover a file from backup if I’ve lost my source database?

Back up the source database, or at least take a copy after your job has finished. I have, as the last job, one that backs up all the other databases, including the configuration db.

This would just involve manually making a copy of the two sqlite files somewhere off the host system?

Some people have done that; others use a --run-script-after process to automatically compress and FTP (or otherwise transfer) the database.
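
As a rough sketch (the paths are just examples; point it at your own job database and destination), the script given to --run-script-after could be as simple as:

#!/bin/sh
# Example only: compress and copy the job database off the host after the backup finishes.
gzip -c /root/.config/Duplicati/IDNNINISSZ.sqlite > /media/backupdrive/IDNNINISSZ.sqlite.gz

with the option passed to the backup as something like --run-script-after=/usr/local/bin/copy-db.sh.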