I proposed they were maybe not re-uploaded, but simply reconnected to data still existing in the backup. Depending on your upload speed, not needing to upload prior data again can be a major time saver.
I was asking things like: what Retention option did you set, and how many versions does the Restore menu show?
Setting this very low might cause re-uploads; setting it reasonably would fare much better.
You can see exactly how much you uploaded in the Complete log of the job run you believe did 150 GB.
Example:
"BackendStatistics": {
"RemoteCalls": 12,
"BytesUploaded": 7415740,
"BytesDownloaded": 17798535,
"FilesUploaded": 4,
If it's not there, how did you measure the 150 GB upload? A similar question is how you measured the 200 GB, which could be seen either from destination size comparisons using your own methods, or from the logs again:
"KnownFileSize": 9240718150,
Subtract the size after the accident from the size before the accident, and see whether it dropped by about 200 GB.
Such a drop would suggest you keep too few versions, and genuinely lost data from the older version.
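If it helps, here is a minimal sketch of that arithmetic in Python (nothing built into Duplicati; the file names are placeholders, and the lookup just searches the JSON rather than assuming the exact nesting of the log), assuming you save the Complete log JSON from the run before the accident and from the run you suspect:

```python
import json

# Placeholder file names: paste each run's Complete log JSON into these.
BEFORE = "complete_log_before.json"   # run before the accident
AFTER = "complete_log_after.json"     # run you believe uploaded 150 GB

def find(obj, key):
    """Return the first value for `key` anywhere in nested JSON,
    so we don't have to assume the exact layout of the Complete log."""
    if isinstance(obj, dict):
        if key in obj:
            return obj[key]
        children = obj.values()
    elif isinstance(obj, list):
        children = obj
    else:
        return None
    for child in children:
        value = find(child, key)
        if value is not None:
            return value
    return None

with open(BEFORE) as f:
    before = json.load(f)
with open(AFTER) as f:
    after = json.load(f)

# How much the suspect run actually sent (compare with the claimed 150 GB).
print("BytesUploaded: %.2f GB" % (find(after, "BytesUploaded") / 1e9))

# How much the destination shrank across the accident (compare with 200 GB).
drop = find(before, "KnownFileSize") - find(after, "KnownFileSize")
print("KnownFileSize drop: %.2f GB" % (drop / 1e9))
```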
The job log of the suspected drop would also show a Compact phase with a lot of deletion activity.
This means: look in the Restore menu to see whether an older version exists, and whether it has the old data. Certainly this would not be as good as if the drive were connected, but it could be the last-seen version.
There’s a suggestion, from the post’s double reference to 130 GB, that you think making the source visible in a Restore of the latest backup means exactly (or roughly) 130 GB is uploaded. If so, that’s not correct, for the reasons I’ve been writing about. Also see Processing similar data in How the backup process works.
From a Features point of view, this is Deduplication (roughly sketched below). That is also what provides Incremental backups, which is what reduces the storage cost of keeping multiple versions. I’m still looking for that information, because the history in your post shows the 1st snapshot was the deletion. Is there a snapshot older than that one? What’s in it? All of that drive’s data as last seen? If the older version is now gone, consider keeping more…
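To illustrate the deduplication point above, here is a toy sketch (my own illustration, not Duplicati’s actual code or block handling): files are split into fixed-size blocks, each block is hashed, and only blocks whose hashes aren’t already in the backup get uploaded; everything else becomes a reference.

```python
import hashlib

BLOCK_SIZE = 100 * 1024      # illustrative block size only
seen_hashes = set()          # stands in for blocks already stored at the destination

def blocks_to_upload(path):
    """Yield only the blocks of `path` that the backup hasn't seen before."""
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            if digest not in seen_hashes:
                seen_hashes.add(digest)
                yield digest, block
            # Already-known blocks are only referenced, not re-uploaded.
```

That is why reconnecting a drive whose data is still in the backup mostly creates references to existing blocks instead of another full upload, and why keeping multiple versions costs far less than keeping multiple full copies.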
Vague is sometimes good, because it allows room for a developer to do something similar, when/if one ever picks up on this. There are a whole lot more “asks” than developer-volunteers right now…
Because the forum is not an issue tracker, and to increase the chances of this staying visible, file an Issue for whatever you are advocating, along with exact reproduction steps that require as little equipment as possible…
Devising a very tiny test case will also let you run with log-file=<path> log-file-log-level=verbose to see what messages emerge, which might give a potential volunteer some hope of locating the relevant code.
You will also be asked to look for similar requests in the forum and Issues. That can support the “ask”. Personally this does not ring any bells with me as something asked for a lot, thus priority is an issue…
Additionally, there are numerous workarounds, so I’m not at all hopeful anybody will jump on this immediately.
It would help to describe what paths and drives you are proposing for this handling. The prior clue was:
So should I think that this is trying to avoid the “I could live with the workaround” option and aim for the original?
When? As explained, there is an early sanity check of the paths you request, but issues can also show up later, when the folders are actually walked. One issue on Windows is usn-policy, which exists to avoid the walk; it must still work.
Channel Pipeline attempts to describe how the process works. Your change needs to fit in somewhere.
I would suggest also keeping non-Windows systems in mind. Only Windows has this sort of drive idea, and even there it’s optional – you don’t have to mount on a drive letter, if I recall correctly; I think you can use a mount point. There is also the potential for relative symlinks, and mklink works differently for files versus folders. There are directory junctions as well as directory symbolic links. What are the current and proposed results for each? Ultimately we don’t want any inadvertent breakage from failing to think things through before changing…
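As a purely illustrative sketch (mine, not anything in the codebase) of why those variants have to be handled deliberately, here is roughly how they look to Python code inspecting a path:

```python
import os

def describe(path):
    """Rough classification of a path, to show how the link variants differ."""
    if os.path.islink(path):
        # True for file and directory symbolic links (relative or absolute),
        # even when the target no longer exists (a "broken" link).
        return "symlink -> " + os.readlink(path)
    if hasattr(os.path, "isjunction") and os.path.isjunction(path):
        # Directory junctions are not reported as symlinks;
        # Python 3.12+ adds os.path.isjunction() for them.
        return "directory junction"
    if os.path.isdir(path):
        return "plain directory"
    if os.path.isfile(path):
        return "plain file"
    return "nothing usable (missing, or an unrecognized reparse point)"

# Example usage; the paths are placeholders.
for p in (r"C:\data\file_symlink", r"C:\data\dir_symlink",
          r"C:\data\junction", r"C:\data\broken_link"):
    print(p, "=>", describe(p))
```

Note that a broken symlink still reports as a symlink, which connects to the “nothing visible to back up” point below.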
This gets back to the unknown layout. If this is a broken symbolic link in a folder on a permanent drive, there’s going to be nothing visible to back up, or to do anything else with. What’s supposed to be there?
This sounds like you want a backup version with a mix of new data and data that isn’t actually present, which is getting even deeper and stranger. A backup version is a point-in-time view of what was found. Should something disappear by deletion of any sort, go back to the earlier version that had that data…