Best/Worst back end protocol?

Is a specific back end protocol better (or worse) than any others for Duplicati to back up to at present? AKA, is there a “best practice” protocol to use?

Background for the question:
Currently my servers are Macs, but they are likely to transition to Linux again. I’m looking into implementing Duplicati as our family’s new backup solution (coming from CrashPlan, doing multi-site backup instead of cloud backup). After a brief look at what Duplicati supports, I’m mainly considering setting up SFTP, MinIO (S3-compatible), or WebDAV on the servers, though I’m open to almost any option I can host myself.

I want to keep things as simple and easy to manage and transition as possible. The backup sets will be large by most home-user standards (several TB), and there will be several computers, but at most a few family members will be backing up to a given server. Most client systems are Macs, but Windows and Linux clients will also exist (though they generally don’t have files on them worth backing up at this time).

This might be helpful:

I’ve been using SFTP to a server running unRAID and have been quite pleased with it so far.


I saw that post, but it seemed more focused on how easy the back end was to set up, rather than on how well Duplicati works with a given protocol. Definitely good to know it has been running without issues on SFTP though, as that’s also really easy to set up and administer on any *nix box.

I’d say the most important part of choosing a backend is picking one you trust that is stable and provides good speeds.

Duplicati issues the same list, put, get, and delete operations against every backend, so, leaving aside problematic backends, they’re all alike from Duplicati’s point of view.
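
To illustrate what that abstraction looks like, here’s a minimal sketch in Python (not Duplicati’s actual C# interface; the class and method names are hypothetical) of a storage backend reduced to those four operations, plus a trivial local-folder implementation standing in for SFTP/S3/WebDAV:

```python
import os
from abc import ABC, abstractmethod
from typing import List


class StorageBackend(ABC):
    """Hypothetical sketch of the minimal surface a Duplicati-style backend
    needs: the tool only ever lists, uploads, downloads, and deletes whole
    files, so any protocol that does those four things reliably can serve
    as a target."""

    @abstractmethod
    def list(self) -> List[str]:
        """Return the names of the files currently stored remotely."""

    @abstractmethod
    def put(self, name: str, local_path: str) -> None:
        """Upload a local file under the given remote name."""

    @abstractmethod
    def get(self, name: str, local_path: str) -> None:
        """Download a remote file to a local path."""

    @abstractmethod
    def delete(self, name: str) -> None:
        """Remove a remote file."""


class LocalFolderBackend(StorageBackend):
    """Toy implementation that 'stores' files in a local directory.
    SFTP, S3, or WebDAV backends differ only in how these calls are
    carried out over the wire."""

    def __init__(self, root: str) -> None:
        self.root = root
        os.makedirs(root, exist_ok=True)

    def list(self) -> List[str]:
        return sorted(os.listdir(self.root))

    def put(self, name: str, local_path: str) -> None:
        with open(local_path, "rb") as src, open(os.path.join(self.root, name), "wb") as dst:
            dst.write(src.read())

    def get(self, name: str, local_path: str) -> None:
        with open(os.path.join(self.root, name), "rb") as src, open(local_path, "wb") as dst:
            dst.write(src.read())

    def delete(self, name: str) -> None:
        os.remove(os.path.join(self.root, name))
```

The point of the sketch is just that the protocol choice doesn’t change what Duplicati asks of the storage, so stability and throughput matter more than the protocol itself.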
