That’s what restoring the last version has to work out and do; the version isn’t available separately, because versions are not standalone. Typically, many files from the initial backup persist for a long time and are kept as individual blocks, while each later backup uploads only the changes, back-referencing the old data.
Deduplication saves a lot of space compared to full file copies, but the files of the last version are built from blocks scattered across various earlier versions. One can find its files, but it takes a lot of SQL…
If you want to install DB Browser for SQLite or similar, we can try, but you’d then get a big list of files that you’d have to pass into your download tool. Some scripting might help with that; see the sketch below.
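Here is a minimal sketch of that scripting, assuming the table names I believe the local job database uses (Fileset, FilesetEntry, File, BlocksetEntry, Block, RemoteVolume; these have changed between Duplicati versions, so verify them in DB Browser for SQLite first). It prints the destination volume files holding the newest version’s data blocks, ignoring metadata blocks and the dlist/dindex files, so treat it as a starting point rather than a finished tool:

```python
import sqlite3
import sys

# Path to a COPY of the job database (never the live one), e.g.
# something like ~/.config/Duplicati/<random-name>.sqlite
db_path = sys.argv[1]
con = sqlite3.connect(db_path)

# Assumed schema (verify in DB Browser for SQLite; it varies by version):
#   Fileset(ID, Timestamp)              one row per backup version
#   FilesetEntry(FilesetID, FileID)     which file entries a version holds
#   File(ID, BlocksetID)                file entry to its blockset
#   BlocksetEntry(BlocksetID, BlockID)  blockset to its data blocks
#   Block(ID, VolumeID)                 block to the volume that holds it
#   RemoteVolume(ID, Name)              volume to destination file name
query = """
SELECT DISTINCT rv.Name
FROM RemoteVolume rv
JOIN Block b          ON b.VolumeID   = rv.ID
JOIN BlocksetEntry be ON be.BlockID   = b.ID
JOIN "File" f         ON f.BlocksetID = be.BlocksetID
JOIN FilesetEntry fe  ON fe.FileID    = f.ID
WHERE fe.FilesetID = (SELECT ID FROM Fileset
                      ORDER BY Timestamp DESC LIMIT 1)
ORDER BY rv.Name
"""

# One destination file name per line; feed the list to your download tool.
for (name,) in con.execute(query):
    print(name)
```

Save it under any name, run it as `python3 list_volumes.py /path/to/copy-of-job.sqlite`, and redirect the output to a file your download tool can read.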
Given that similar versions use little extra space, you can compare the source size with the destination size to see how they relate. If the destination is only growing slowly, it might not be worth fighting.
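As a made-up example: if the source is 100 GB and the destination sits at 120 GB after thirty versions, the entire version history is costing about 20 GB, which is probably cheaper than the effort of pruning it by hand.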
That probably needs developer interpretation, but are you sure you got the message right?
Google has never seen that combination. Was it perhaps “A task was canceled”?
There are a few reports of that one, possibly resulting from download problems.
Surrounding log context would help. Did the download fail just before that failure?
--number-of-retries (Integer): Number of times to retry a failed transmission
If an upload or download fails, Duplicati will retry a number of times
before failing. Use this to handle unstable network connections better.
* default value: 5
Raising the retry count can sometimes ride through temporary failures. A verbose log will show any retries that were done.
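For example (target URL and source path are placeholders, and the executable name varies by platform and version), `Duplicati.CommandLine.exe backup <target-url> <source-path> --number-of-retries=10` doubles the default; the same option can be set in the GUI under the job’s advanced options. There is also, if I recall correctly, a companion `--retry-delay` option controlling the wait between attempts.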
I wouldn’t think so. The port number would be unique, which allows connection tracking.
You might, however, be loading the connection heavily, so each computer gets less bandwidth.
If you’re running into timeout trouble, some timeouts can be increased, but check the logs first to confirm that’s the actual problem.
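For example, `--http-operation-timeout` can be raised on HTTP-based backends, though I’d check the advanced options list for your storage type, since the available timeout options vary by backend.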
Restore won’t change the destination. Don’t run backup, repair, or other operations that do.
Note that copying the destination to a local drive with rclone would be an even better way to prevent any damage.
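For example (remote name and paths are placeholders), `rclone copy myremote:bucket/duplicati-backup /mnt/duplicati-copy` mirrors the destination locally; pointing restores at the copy means even a mistaken operation can’t touch the real backup.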