I use the SFTP protocol to store backups on a remote server. When I recover files from the remote location, it takes me a rather long time to navigate to the location where the file(s) are stored. Every time I expand a folder, it takes 10 seconds to load the sub-folders. Only then can I check the next sub-folder and wait another 10 seconds. When I finally arrive at the location of interest, I learn that I picked the wrong restore time… and the scenario starts again.
Since I know the correct path from the original computer, it would be much faster to just paste it into a field and have Duplicati open that location on the remote side. Afterwards, I’d like to browse through time.
Is this something that would be reasonable to implement? Or is there maybe a “workaround”?
The restore browser doesn’t talk to the remote side at all. Duplicati queries your local, job-specific SQLite database. The speed of restore browsing depends on your computer’s performance, the number of files you protect, the data size, the number of versions you are retaining, your deduplication block size, and possibly some other factors.
If you know the path of what you want to restore, you can use the Search box above the folder/file browser. It should get you to your file with a single database search.
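To illustrate that the file list really is local, here is a minimal Python sketch (not an official Duplicati tool) that searches a copy of the job’s SQLite database for a path fragment. The database location is whatever your job’s Database page shows, and the `File` table/view with its `Path` column is an assumption based on one version of the local schema, which can change between Duplicati releases, so treat this as a read-only experiment rather than a supported interface.

```python
# Minimal sketch: search a COPY of a Duplicati job database for a path fragment.
# Assumptions (may differ per Duplicati version): the database exposes a "File"
# table or view with a full "Path" column.
import shutil
import sqlite3
import sys


def search_local_db(db_path: str, pattern: str) -> None:
    # Work on a copy so the live job database is never touched.
    work_copy = db_path + ".copy"
    shutil.copyfile(db_path, work_copy)

    con = sqlite3.connect(work_copy)
    try:
        # Substring match on the stored paths; this is the same kind of
        # local lookup the restore Search box performs.
        rows = con.execute(
            "SELECT Path FROM File WHERE Path LIKE ? ORDER BY Path",
            (f"%{pattern}%",),
        ).fetchall()
    finally:
        con.close()

    for (path,) in rows:
        print(path)


if __name__ == "__main__":
    # Example: python search_local_db.py /path/to/JOBDB.sqlite "Invoices/2023"
    search_local_db(sys.argv[1], sys.argv[2])
```

The point is simply that a path search never touches the SFTP server, which is why the Search box is so much faster than expanding folders one by one.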
Maybe Duplicati should offer some guidance on what kind of input is accepted in the “Search for files” input box (in the restore dialog). For example: Can I use wildcards like “*.docx”? Locations/paths (which you answered above)?
Very much looking forward to that time-machine feature!