duplicati-cli restore hangs after a while; at that point no CPU or I/O activity is observed and no error messages appear. I’m not sure what to do here.
Version: 2.1.0.5_stable_2025-03-04
I tried this command (edited out sensitive information)
--jottacloud-threads (Integer): Number of threads for restore operations
Number of threads for restore operations. In some cases the download rate
is limited to 18.5 Mbps per stream. Use multiple threads to increase
throughput.
* default value: 4
is another thing you could try, e.g. change that to 1 to see if it does any better on restore.
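For example (the storage URL and restore path below are placeholders, not from your setup, since your actual command was redacted):

```shell
# Sketch only: restore using a single Jottacloud download thread.
# Replace the URL, file pattern, and restore path with your own values.
duplicati-cli restore "jottacloud://backup-folder?authid=PLACEHOLDER" "*" \
  --jottacloud-threads=1 \
  --restore-path="/mnt/spare/restore"
```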
There was an attempt there to add what I view as a Duplicati-level safety timeout, called
--read-write-timeout (Timespan): Set the read/write timeout for the
connection
The read/write timeout is the maximum amount of time to wait for any
activity during a transfer. If no activity is detected for this period,
the connection is considered broken and the transfer is aborted. Set to 0s
to disable.
* default value: 10m
which presumably was meant to force a stalled transfer into a retry, in the hope that the retry fares better.
Does it do any retries, or does it not even get that far? Unfortunately, there’s no low-level network log.
--log-file=<path> and --log-file-log-level=retry might show some info though, without revealing anything sensitive. The dindex and dblock names are just unique, with nothing sensitive about them. You can redact if you like though.
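For example, added to the existing restore command (URL and paths here are placeholders):

```shell
# Sketch only: the same restore, but with retry-level logging to a file.
duplicati-cli restore "jottacloud://backup-folder?authid=PLACEHOLDER" "*" \
  --log-file="/tmp/duplicati-restore.log" \
  --log-file-log-level=retry
```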
I’ll give those options a shot. What I have been doing is closing the program with Ctrl+C and restarting it every time this occurs. Each restart gets a little further: the number and size of files it reports as still needing processing keep shrinking.
I do have one other question that maybe you or someone can answer. What happens if two files are identical? The documentation says it renames files that have the same name, but I’m assuming that if both the file name and the content are the same, it doesn’t redownload the file. Is that true?
Unclear what you mean, but it doesn’t sound like you mean two files at different paths.
The rename comes in when the file version at a path is replaced with different content.
Duplicati.CommandLine help overwrite
--overwrite (Boolean): Overwrite files when restoring
Use this option to overwrite target files when restoring. If this option is
not set, the files will be restored with a timestamp and a number
appended.
* default value: false
The timestamp is only used if the file is actually different. It isn’t changed if the file is already OK, which is why your unfortunate incremental restore is making progress getting files back.
If “are the same” means compared to the backup version restored, no need to download.
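If renamed copies are not wanted, the overwrite option from the help text above goes on the restore command line; a sketch with placeholder values:

```shell
# Sketch only: restore in place, overwriting differing target files
# instead of writing renamed copies alongside them.
duplicati-cli restore "jottacloud://backup-folder?authid=PLACEHOLDER" "*" \
  --overwrite=true \
  --restore-path="/mnt/spare/restore"
```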
I should say what I’m doing in case there is a better way to go about this. I’m trying to compare all files ever backed up from a directory with the current live set by downloading the directory in question to a spare hard drive.
One version at a time, for all versions, to allow the “ever backed up” comparison, keeping track of which version is at what time (because I “think” the timestamp is the time of restore)?
Sounds pretty painful. Do you need all the file versions actually restored for this analysis?
I don’t know what exactly “compare” means. If you need to open files, you need the files…
Shortcuts are possible if you need less, e.g. comparing two versions to see what’s added, deleted, or modified has the compare command, but it’s not going to detail exact changes.
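A sketch of that command, with a placeholder URL and illustrative version numbers (version 0 is the most recent backup; if I recall correctly, --full-result keeps the change list from being truncated):

```shell
# Sketch only: list files added, deleted, or modified between backup
# versions 1 (older) and 0 (newest).
duplicati-cli compare "jottacloud://backup-folder?authid=PLACEHOLDER" 1 0 --full-result
```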
I may have deleted files, and I don’t remember every file name. I would know better if I could get a list of what is missing or changed. My initial idea was to redownload everything, compare which files are different, and then decide whether I still actually need those files by seeing what they are.
and then it just ends there. It’s 6 AM local time on May 20th as I write this, and that shows it stopped doing anything after 10:08 PM last night, May 19th. No error or explanation as to why.
and that’s it. Typically (and unfortunately) there’s not a lot of lower-level transfer logging.
Maybe the devs will have another idea, or maybe one of them has Jottacloud, though likely with fewer files.
For your part, please say what options you use that might be relevant. There’s also this:
Where someone with Jottacloud (you?) could see whether the problem can be reproduced more reliably.
The history of this is confusing, although it’s possible the behavior is just inconsistent.
--read-write-timeout (Timespan): Set the read/write timeout for the connection
The read/write timeout is the maximum amount of time to wait for any activity
during a transfer. If no activity is detected for this period, the connection
is considered broken and the transfer is aborted. Set to 0s to disable.
* default value: 10m
was one theory, attractive after you reported that --read-write-timeout=0 (disable) fixed it.
The suggestion was that maybe you could make the problem worse with a short timeout, perhaps on a small test backup, although I suppose a hung restore from the main backup works too.
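Something along these lines, with illustrative values against a small test backup:

```shell
# Sketch only: a deliberately short timeout, to see whether it makes the
# hang/abort easier to trigger on a small test backup.
duplicati-cli restore "jottacloud://test-backup?authid=PLACEHOLDER" "*" \
  --read-write-timeout=30s \
  --restore-path="/tmp/test-restore"
```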
Unfortunately, the latest report that --read-write-timeout=0 isn’t the cure weakens that theory.
It’s probably worth a try. I don’t have Jottacloud, so I don’t know how the logs should look with the default of 4 threads.
--jottacloud-threads (Integer): Number of threads for restore operations
Number of threads for restore operations. In some cases the download rate is
limited to 18.5 Mbps per stream. Use multiple threads to increase throughput.
* default value: 4
This whole idea seems unique to Duplicati’s Jottacloud code. Maybe the devs can comment after a look at the code. Maybe I’m misinterpreting what each thread here is doing…