I’m not sure where this is heading. The main backup may not be OK. The secondary may not be OK either, because it may pick up a bad main backup (versioning can cover that) or suffer a direct attack of its own.
Orchestration of the two-stage backup is an issue. If security were not a concern, you might consider having the third-party duplicati-client drive two copies of Duplicati, but letting Duplicati be remotely controlled exposes it to attacks on its web server (which is not hardened), or the attacker may simply steal credentials. You can reduce the risk at the Duplicati level using its IP controls, but external firewalling is safer.
Another thing you can do is use `--upload-verification-file` and utility-scripts/DuplicatiVerify.py to check backup integrity before doing the secondary backup. This might also catch non-security integrity issues.
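The idea behind the verification file can be sketched roughly like this. This is a simplified stand-in for what DuplicatiVerify.py does, not its actual code; the `verify_folder` function and the JSON manifest layout here are assumptions for illustration, and Duplicati's real duplicati-verification.json format differs:

```python
import base64
import hashlib
import json
from pathlib import Path


def verify_folder(folder: str, manifest: str = "verification.json") -> list[str]:
    """Compare SHA-256 hashes of backend files against a manifest.

    Assumed manifest format (simplified, not Duplicati's real layout):
    a JSON list of {"Name": ..., "Hash": ...} objects, where Hash is
    the Base64-encoded SHA-256 digest of the file's contents.
    Returns the names of missing or mismatched files.
    """
    folder_path = Path(folder)
    entries = json.loads((folder_path / manifest).read_text())
    bad = []
    for entry in entries:
        target = folder_path / entry["Name"]
        if not target.exists():
            bad.append(entry["Name"])
            continue
        digest = hashlib.sha256(target.read_bytes()).digest()
        if base64.b64encode(digest).decode() != entry["Hash"]:
            bad.append(entry["Name"])
    return bad  # empty list means everything matched
```

Run something like this against the main backup's destination folder and only start the secondary backup if the result is empty, so a corrupted or tampered primary never propagates.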
A simple attack would do something like encrypt the original source files. The next step up might go after the backup itself by taking the access credentials from Duplicati (or whatever tool you use). If a tool can reach the backup, and an attacker controls the tool, then the attacker can reach it too. Defense must be server-side, e.g. keep files immutable in normal operation, protect the ability to alter that policy well, and keep those credentials off the uploading PC.
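On a plain Linux backup server, one server-side approach along those lines is the filesystem immutable flag. This is only a sketch; the `/backups/duplicati` path is an assumption, and the flag only helps if the uploader's credentials cannot run privileged commands on the server:

```shell
# Make existing backup files immutable: they cannot be modified,
# renamed, or deleted through normal write paths (including SFTP)
# until the flag is cleared by root on the server itself.
sudo chattr +i /backups/duplicati/*

# Clearing the flag (e.g. before a retention cleanup) requires root
# on the server, not just the upload credentials:
sudo chattr -i /backups/duplicati/*
```

This keeps the "alter the policy" step on the server, away from the possibly compromised uploading PC.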
It really depends on how far you want to go. Ordinary ransomware is probably the easiest to stop. A lengthy assault by skilled attackers inside your computers is harder, as they will try to move laterally through your systems. Unless your data is very high-value, I’d worry more about the simpler forms of attack, but it’s your call.
Offline backups may be an option, but for online backups all I can say is: make sure remote access can’t destroy them. Don’t give the uploading systems (which may be totally compromised) any way to do that.
Better still would be to restrict the web interface to localhost, ssh to the remote system, and browse to the localhost GUI. HTTPS protects against eavesdropping and gives you some assurance you’re on the correct site; otherwise someone could steal credentials via a MITM attack. HTTPS doesn’t block attacks on the web server itself, or even simple password guessing. Something like SSH is more hardened and has attack-mitigation tools, though mitigating rapid password guessing can sometimes leave you open to denial-of-service attacks. Also note the earlier recommendation not to leave the web server accessible. Best to firewall it, if possible; better still, firewall SSH too, if possible, and stick to localhost. It depends on how seriously you want security. Several forum users have worried about specific SSH crypto algorithms they find weak. Security can have weak spots, so please keep the overall system view in mind and use layers of protection.
EDIT: “ssh to remote” refers to SSH port forwarding: basically, create an encrypted tunnel and do your browsing through it.
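As a sketch of the tunnel, assuming Duplicati’s web UI is listening on its default port 8200 on the remote machine, and with `user@backuphost` standing in for your own account and server name:

```shell
# Forward local port 8200 to localhost:8200 on the remote machine,
# where the Duplicati web UI (restricted to localhost) is listening.
ssh -L 8200:localhost:8200 user@backuphost
```

While that session is open, browse to http://localhost:8200 on your local machine; the traffic travels inside the encrypted SSH tunnel, and the web server is never exposed to the network.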