That’s what I want on a mobile, but I appreciate I cannot have it now. If my Backblaze problem cannot be sorted, I’ll reconsider the Raspberry Pi option.
[edit] Backblaze have said “Sometimes the web interface can take up to 48 hours to reflect your most recent changes” which is ridiculous and useless. I asked how I could find and check the logs of files backed up, and they ignored the question. I think they assume all users are idiots. I’m highly unamused.
Yes, that’s the point of DDNS: it keeps track of your external IP address as it changes.
To reach it from outside the NAT you then need to use port forwarding in your router, sending external accesses on your nominated port to the correct internal IP address.
Obviously if you have a router that doesn’t support this, or it’s your employer’s network where you wouldn’t have router access, you can’t, but most domestic networks support it.
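As a purely hypothetical illustration (the hostname, address and port below are made up, not from this thread): a rule forwarding external TCP port 8200 to 192.168.1.50:8200 would let something like myhost.dyndns.example:8200 reach, say, a Duplicati web UI running on that internal machine. The DDNS client keeps the hostname pointing at your changing external IP, and the forwarding rule handles the last hop inside the NAT.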
Just a small note here: some ISPs will NAT your router into a pool behind your “public IP” along with a bunch of other customers’ routers. This means that your router’s NAT rules do not actually apply, because incoming traffic hits the ISP’s NAT first.
My ISP does this, but you basically just have to call them and they’ll switch you to a public IP that’s mapped 1-to-1
In case anyone is interested, below is what Backblaze support told me about their “continuous” backup mode, which was a big surprise:
“… once [the initial backup is complete and] the queue is actually cleared … it will try to backup any files that were previously skipped. Once that is done, it will move to steady state. Once in steady state, while set on the Continuous schedule, Backblaze will scan your computer and backup new/changed files under 30 MB every 2-3 hours, and new/changed files over 30 MB every 48 hours”
Not what I would call “continuous”, and it will not give me the versioning I require for files currently being worked on - I have Excel set to autosave every 10 minutes, and this, with Crashplan, has saved me on quite a few occasions over the years.
So “hybrid” it will have to be for me. Duplicati and Crashplan until the end of my first year on the small business plan as I get a substantial discount; and then Duplicati and probably Backblaze.
Some surprises are documented (some might say disclaimed) at Help Desk > Backblaze Personal Backup > How It Works, and it partly reflects a philosophical difference. Backblaze does a leisurely backup, even more so for large files; Duplicati backs up as fast as it can at the scheduled time. Total disk loss can be a pain with Duplicati because of the need to recreate the database before one is fully back up, and internet speed is a factor. Backblaze can restore by mail. Different backups have different strengths. Hybrid may help, plus it’s safer…
CrashPlan is good at finding changes fast via change notifications, and Duplicati has added this (in canary):
C:\Program Files\Duplicati 2>duplicati.commandline.exe help usn-policy
--usn-policy (Enumeration): Controls the use of NTFS Update Sequence Numbers
This setting controls the usage of NTFS USN numbers, which allows
Duplicati to obtain a list of files and folders much faster. If this is
set to "off", Duplicati will not attempt to use USN. Setting this to
"auto" makes Duplicati attempt to use USN, and fail silently if that was
not allowed or supported. A setting of "on" will also make Duplicati
attempt to use USN, but will produce a warning message in the log if it
fails. Setting it to "required" will make Duplicati abort the backup if
the USN usage fails. This feature is only supported on Windows and
requires administrative privileges.
* values: Auto, On, Off, Required
* default value: off
C:\Program Files\Duplicati 2>
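To show where the option fits, here is a minimal sketch of a command-line backup with it enabled. The destination and source paths are placeholders, not anything from this thread, and --no-encryption is only there to keep the example short:
C:\Program Files\Duplicati 2>duplicati.commandline.exe backup file://E:\DuplicatiBackup D:\Work --usn-policy=auto --no-encryption=true
For backups defined in the web UI, the same usn-policy setting should be available per backup under the advanced options.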
If one winds up backing up some area intensively, one can also set a custom retention rule to thin things out.
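As a sketch of what that can look like (the values here are illustrative, not a recommendation for this setup), a custom retention rule is a comma-separated list of timeframe:interval pairs, e.g. keep one version per day for the last week, one per week for the last four weeks, and one per month for the last twelve months:
duplicati.commandline.exe backup file://E:\DuplicatiBackup D:\Work --retention-policy="1W:1D,4W:1W,12M:1M"
Backups falling outside the last timeframe are deleted, so intensive bursts of versions get thinned out automatically over time.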
Backblaze on its own is a poor substitute for Crashplan, IMO. I hope not too many people are bored by these mainly non-Duplicati issues.
I’ve set usn-policy to “on” for now, and will switch it to “auto” once I know whether it is working - Duplicati warnings tend to drive me crazy. Not sure why the default isn’t “auto” as, at worst, it should just fail gracefully. Many thanks for the pointer.
I’m confident that, at the least, I am getting there, and I should benefit from a more robust backup system with multiple failsafes. Luckily, my dataset is relatively small, c. 100 GB.