Duplicati-cli restore suddenly hangs without any error message

duplicati-cli restore hangs after a while. At that point there is no CPU or I/O activity, and no apparent error messages. I'm not sure what to do here.

Version: 2.1.0.5_stable_2025-03-04

I tried this command (sensitive information edited out):

/opt/duplicati/duplicati-cli restore "jottacloud://<AUTH>"  "/mnt/kumo_raid/nextcloud/*" --backup-name="Jottacloud <BACKUP_NAME>" --dbpath=/var/lib/duplicati/.config/Duplicati/<DATABASE>.sqlite --backup-id=DB-1 --encryption-module=aes --passphrase="<PASSPHRASE>" --dblock-size=50mb --compression-module=zip --retention-policy="7D:0s,3M:1D,2Y:1W,99Y:1M" --blocksize=10240KB --number-of-retries=500000 --retry-delay=40s --disable-module=console-password-input --restore-path=/mnt/14tb-crypt/RESTORE_TEST_DUPLICATI/

I increased the number of retries thinking it might help, but it didn't. The restore suddenly stops at this type of message and doesn't continue:

  Downloading file duplicati-<IDENTIFICATION_BLOCK>.dblock.zip.aes (40.008 MB) ...
  --jottacloud-threads (Integer): Number of threads for restore operations
    Number of threads for restore operations. In some cases the download rate
    is limited to 18.5 Mbps per stream. Use multiple threads to increase
    throughput.
    * default value: 4

is another thing you could try, e.g. change it to 1 to see if the restore does any better.

There was an attempt to add what I view as a Duplicati-level safety timeout, called

  --read-write-timeout (Timespan): Set the read/write timeout for the
    connection
    The read/write timeout is the maximum amount of time to wait for any
    activity during a transfer. If no activity is detected for this period,
    the connection is considered broken and the transfer is aborted. Set to 0s
    to disable.
    * default value: 10m

which presumably was meant to force a stalled transfer into a retry, in the hope that the retry will fare better.

Does it do any retries, or does it not even start them? Unfortunately, there's no low-level network log.

--log-file=<path> and --log-file-log-level=retry might show some info, though, without revealing anything sensitive. The dindex and dblock names are just unique identifiers, with nothing sensitive about them. You can redact them if you like.
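
For example, appended to the end of your restore command, it might look like this (the log path is just an illustration):

  ... --log-file=/root/restore_test.log --log-file-log-level=retry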

I'll give those options a shot. What I have been doing is closing the program with Ctrl+C and restarting it every time the hang occurs. It has slowly been getting closer to finished, as the number and size of the files it reports as still needing processing keep shrinking.

I do have one other question that maybe you or someone else can answer. What happens if two files are identical? I read in the documentation that it renames files that have the same names, but I'm assuming that if both the file name and the content are the same, it doesn't redownload the file. Is that true?

It's unclear what you mean, but it doesn't sound like you mean two files at different paths.

The rename comes in when the file version at a path is replaced with different content.

Duplicati.CommandLine help overwrite
  --overwrite (Boolean): Overwrite files when restoring
    Use this option to overwrite target files when restoring. If this option is
    not set, the files will be restored with a timestamp and a number
    appended.
    * default value: false

The timestamp is only used if the file is actually different. The file isn't changed if it's already OK, which is why your unfortunate incremental restore is making progress getting files back.

If “are the same” means compared to the backup version restored, no need to download.
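
For completeness, if you ever want restored files to replace existing ones instead of getting renamed copies, a minimal sketch (the filter and paths are placeholders):

  duplicati-cli restore <storage-url> "/mnt/kumo_raid/nextcloud/*" --overwrite=true --restore-path=/mnt/14tb-crypt/RESTORE_TEST_DUPLICATI/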

I should say what I’m doing in case there is a better way to go about this. I’m trying to compare all files ever backed up from a directory with the current live set by downloading the directory in question to a spare hard drive.

One version at a time, for all versions, to allow the "ever backed up" comparison, keeping track of which version is at what time (because I "think" the timestamp is the time of restore)?
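
If you did go that route, a per-version loop might look something like this (a sketch; the version range 0-8, URL, and paths are assumptions about your setup):

  # sketch: restore each backup version into its own folder; --version=0 is the newest
  for v in $(seq 0 8); do
    /opt/duplicati/duplicati-cli restore "jottacloud://<AUTH>" "/mnt/kumo_raid/nextcloud/*" \
      --version=$v --restore-path=/mnt/14tb-crypt/RESTORE_v$v/ \
      --dbpath=/var/lib/duplicati/.config/Duplicati/<DATABASE>.sqlite --passphrase="<PASSPHRASE>"
  done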

Sounds pretty painful. Do you need all the file versions actually restored for this analysis?

I don’t know what exactly “compare” means. If you need to open files, you need the files…

Shortcuts are possible if you need less; e.g. comparing two versions to see what's added, deleted, or modified can be done with the compare command, but it's not going to detail the exact changes.
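
A minimal sketch of that (the version numbers are placeholders; 0 is the newest, and --full-result should expand the change list):

  duplicati-cli compare "jottacloud://<AUTH>" 1 0 --full-result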

I may have deleted files, and I don't remember every file name. I would know better if I could get a list of what is missing or changed. My initial idea was to redownload everything, compare which files are different, and then decide whether I still actually need those files by seeing what they were.

That list can seemingly be seen with the all-versions option and a search in the GUI. In a sample result there, I see a merge of A.txt and B.txt, which I had only briefly; I'd forgotten about those files.

If the "ever backed up" GUI view isn't enough, you can use the commandline find command to get it listed.
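
A sketch of the invocation (the storage URL and filter are placeholders; --all-versions is the key option):

  duplicati-cli find <storage-url> "*" --all-versions=true

which produces output like: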

Listing files and versions:
C:\backup source\A.txt
0	: 5/7/2025 11:01:02 AM  - 
1	: 5/7/2025 11:00:38 AM 1 bytes
2	: 5/5/2025 8:08:40 PM  - 
3	: 4/29/2025 5:54:01 PM  - 
4	: 4/27/2025 4:19:26 PM  - 
5	: 4/27/2025 4:18:28 PM  - 
6	: 4/27/2025 3:41:02 PM  - 
7	: 4/27/2025 3:37:34 PM  - 
8	: 4/26/2025 9:59:02 AM  - 

C:\backup source\B.txt
0	: 5/7/2025 11:01:02 AM 1 bytes
1	: 5/7/2025 11:00:38 AM  - 
2	: 5/5/2025 8:08:40 PM  - 
3	: 4/29/2025 5:54:01 PM  - 
4	: 4/27/2025 4:19:26 PM  - 
5	: 4/27/2025 4:18:28 PM  - 
6	: 4/27/2025 3:41:02 PM  - 
7	: 4/27/2025 3:37:34 PM  - 
8	: 4/26/2025 9:59:02 AM  - 

C:\backup source\short.txt
0	: 5/7/2025 11:01:02 AM 17 bytes
1	: 5/7/2025 11:00:38 AM 17 bytes
2	: 5/5/2025 8:08:40 PM 17 bytes
3	: 4/29/2025 5:54:01 PM 168 bytes
4	: 4/27/2025 4:19:26 PM 168 bytes
5	: 4/27/2025 4:18:28 PM 168 bytes
6	: 4/27/2025 3:41:02 PM 168 bytes
7	: 4/27/2025 3:37:34 PM 168 bytes
8	: 4/26/2025 9:59:02 AM 168 bytes

C:\tmp\datafolder\
0	: 5/7/2025 11:01:02 AM  - 
1	: 5/7/2025 11:00:38 AM  - 
2	: 5/5/2025 8:08:40 PM  - 
3	: 4/29/2025 5:54:01 PM  - 
4	: 4/27/2025 4:19:26 PM  - 
5	: 4/27/2025 4:18:28 PM  - 
6	: 4/27/2025 3:41:02 PM  - 
7	: 4/27/2025 3:37:34 PM  - 
8	: 4/26/2025 9:59:02 AM  - 

C:\tmp\datafolder\dbconfig.json
0	: 5/7/2025 11:01:02 AM  - 
1	: 5/7/2025 11:00:38 AM  - 
2	: 5/5/2025 8:08:40 PM  - 
3	: 4/29/2025 5:54:01 PM  - 
4	: 4/27/2025 4:19:26 PM  - 
5	: 4/27/2025 4:18:28 PM 313 bytes
6	: 4/27/2025 3:41:02 PM  - 
7	: 4/27/2025 3:37:34 PM  - 
8	: 4/26/2025 9:59:02 AM  - 

so you can see when files were or weren't there, and if they were there, what the file length was.

Adding --read-write-timeout=0 seems to have allowed the process to complete without errors. Thank you for the hint there.

That seems backwards from what I expected, but maybe the dev can explain: setting it to 0 disables what I would have thought was the safety timeout that kicks a stalled transfer into a retry.

If Jottacloud is exceedingly uncooperative, maybe a 10 minute timeout is too low.

You could look at the job log's Complete log count of RetryAttempts for a clue.

I also posted a log-file option earlier, and About → Show log → Stored might also help.

Regardless, I’m glad it’s better somehow.

I agree that it should be the other way around.

My best guess is that the timeout happens, but the Jottacloud backend does not handle it well and ends up deadlocking instead of retrying?

I don't have any logs from the failed runs, it seems. They never "failed", since I just force-quit with Ctrl+C in the terminal.

This theory could possibly be tested in a disposable test backup with some short timeouts.
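
A rough sketch of such a test, assuming a small throwaway backup (the test URL, timeout value, and paths are all placeholders):

  duplicati-cli backup "jottacloud://test-folder/?authid=<AUTH_ID>" /path/to/small-folder --passphrase="<PASSPHRASE>"
  duplicati-cli restore "jottacloud://test-folder/?authid=<AUTH_ID>" "*" --read-write-timeout=5s --restore-path=/tmp/restore-test --passphrase="<PASSPHRASE>"

If the timeout-handling theory holds, the very short timeout should make the restore hang sooner.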

I had to rerun it, so I added those logging options. The last few lines are just:

2025-05-19 22:08:05 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-b32aacc2aa38d4561811758661b2c13f1.dblock.zip.aes (47.014 MB)
2025-05-19 22:08:05 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-b332d0b7df4264ff795da720f260f251e.dblock.zip.aes (40.007 MB)

and then it just ends there. It's 6 AM local time on May 20th as I write this, and the log shows it stopped doing anything after 10:08 PM on May 19th, last night. There's no error or explanation as to why.

Attached is the full log I was able to get from the run:
restore_test.log.zip (161.1 KB)

Added on top of which other options?

Is that one in use on this run? Is the option present at all?

Was --jottacloud-threads ever tested? Was it used here?

Downloads appear to run strictly one at a time in the log:

2025-05-19 06:27:15 -07 - [Information-Duplicati.Library.Main.Operation.RestoreHandler-RemoteFileCount]: 68010 remote files are required to restore
2025-05-19 06:27:15 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-b000be4c0226c44f6acea20e10d4050f2.dblock.zip.aes (45.599 MB)
2025-05-19 06:32:00 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-b000be4c0226c44f6acea20e10d4050f2.dblock.zip.aes (45.599 MB)
2025-05-19 06:32:00 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-b0015d001c44a4adbae64568bd0569301.dblock.zip.aes (40.550 MB)
2025-05-19 06:32:11 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-b0015d001c44a4adbae64568bd0569301.dblock.zip.aes (40.550 MB)
2025-05-19 06:32:11 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-b001d4d0019334731b2aaf34b2573cfd0.dblock.zip.aes (41.243 MB)
2025-05-19 06:32:22 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-b001d4d0019334731b2aaf34b2573cfd0.dblock.zip.aes (41.243 MB)
...
2025-05-19 22:07:40 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-b3221619f2d3f4e91b1b05acbc0d8b55c.dblock.zip.aes (43.282 MB)
2025-05-19 22:07:53 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-b3221619f2d3f4e91b1b05acbc0d8b55c.dblock.zip.aes (43.282 MB)
2025-05-19 22:07:53 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-b32aacc2aa38d4561811758661b2c13f1.dblock.zip.aes (47.014 MB)
2025-05-19 22:08:05 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Completed: duplicati-b32aacc2aa38d4561811758661b2c13f1.dblock.zip.aes (47.014 MB)
2025-05-19 22:08:05 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Get - Started: duplicati-b332d0b7df4264ff795da720f260f251e.dblock.zip.aes (40.007 MB)

and that's it. Typically (and unfortunately) there's not a lot of lower-level transfer logging. Maybe the devs will have another idea, or maybe one of them has Jottacloud, though likely with fewer files.

For your part, please say which options you use that might be relevant. There's also this:

where someone with Jottacloud (you?) could see if the problem can be reproduced more reliably.

Full command:

/opt/duplicati/duplicati-cli restore "jottacloud://<CLOUD_FOLDER>/?authid=<AUTH_ID>" "/mnt/kumo_raid/nextcloud/*" --backup-name="Jottacloud <NAME>" --dbpath=/var/lib/duplicati/.config/Duplicati/JKHDJVYTTR.sqlite --encryption-module=aes --passphrase="<PASSPHRASE>" --dblock-size=50mb --compression-module=zip --retention-policy="7D:0s,3M:1D,2Y:1W,99Y:1M" --blocksize=10240KB --number-of-retries=50 --retry-delay=20s --disable-module=console-password-input --read-write-timeout=0 --restore-path=/run/media/root/<UUID_mount>/RESTORE_TEST_DUPLICATI/ --log-file=/root/restore_test.log --log-file-log-level=retry

I’m sorry, should I run a shorter timeout?

I can try --jottacloud-threads=1 now since I need to start the command again.

The history of this is confusing, although it's possible the behavior is just inconsistent.

  --read-write-timeout (Timespan): Set the read/write timeout for the connection
    The read/write timeout is the maximum amount of time to wait for any activity
    during a transfer. If no activity is detected for this period, the connection
    is considered broken and the transfer is aborted. Set to 0s to disable.
    * default value: 10m

was one theory, attractive after you reported that --read-write-timeout=0 (disable) fixed it. The suggestion was that maybe you could make the problem worse with a short timeout, ideally on a small test backup, although I suppose a hung restore from the main one works too.

Unfortunately, the latest report that --read-write-timeout=0 isn't the cure weakens that theory.

It's probably worth a try. I don't have Jottacloud, so I don't know how the logs should look at the default of 4 threads.

  --jottacloud-threads (Integer): Number of threads for restore operations
    Number of threads for restore operations. In some cases the download rate is
    limited to 18.5 Mbps per stream. Use multiple threads to increase throughput.
    * default value: 4

This whole idea seems unique to Duplicati's Jottacloud code. Maybe the devs can comment after a look at the code. Maybe I'm misinterpreting what each thread here is doing…

--jottacloud-threads=1 seems to have caused it to hang faster. It’s already stuck.

Full command:

/opt/duplicati/duplicati-cli restore "jottacloud://<CLOUD_FOLDER>/?authid=<AUTH_ID>" "/mnt/kumo_raid/nextcloud/*" --backup-name="Jottacloud <NAME>" --dbpath=/var/lib/duplicati/.config/Duplicati/JKHDJVYTTR.sqlite --encryption-module=aes --passphrase="<PASSPHRASE>" --dblock-size=50mb --compression-module=zip --retention-policy="7D:0s,3M:1D,2Y:1W,99Y:1M" --blocksize=10240KB --number-of-retries=50 --retry-delay=20s --disable-module=console-password-input --read-write-timeout=0 --restore-path=/run/media/root/<UUID_MOUNT>/RESTORE_TEST_DUPLICATI/ --log-file=/root/restore_test2.log --log-file-log-level=retry --jottacloud-threads=1

and attached log from that session
restore_test2.log.zip (12.5 KB)