I am a CrashPlan user looking to migrate to a new offering. For background, I have about 600GB stored on a NAS, the majority of it RAW image files. I currently run CrashPlan on a Windows PC and manually trigger backups periodically. It works, but with the service closing down, I am rethinking my strategy.
I happen to have a bunch of SBCs floating around the house and am toying with setting one up as a dedicated Duplicati backup server that would copy data to a cloud provider. This brings me to my questions:
Would an SBC like an RPi3 or a Pine64 have enough compute power to serve as a dedicated Duplicati server?
Would the server need large amounts of storage?
Is there anything else that experienced users would suggest that I consider with this architecture?
Duplicati is “client only”, so there is no server component as such, but maybe you want the SBC/RPi to copy data from the NAS to some other storage?
Duplicati requires Mono, which runs on most hardware (an RPi2 works fine), though the performance is not stellar.
Not really. It needs an SQLite database (aka the local database) to keep a cache of which files and blocks are stored remotely. This is typically less than 1GB, but it depends on the number of files, path lengths, etc.
Sorry for the lack of clarity. (Funnily enough, I actually work in the backup and recovery industry, so I am more familiar with this stuff than the average person.) To clarify, my plan is to mount a NAS share permanently on the SBC and then have it back up automatically to the cloud. Hence, Duplicati would be backing up data that it sees as locally stored.
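For what it's worth, a permanent mount like that is usually done with an `/etc/fstab` entry so the share comes back after a reboot. A minimal sketch for a CIFS/SMB share (the server address, share name, mount point, and credentials file below are all placeholders, and NFS would work just as well):

```
# /etc/fstab entry (example values; adjust server, share, and options for your NAS)
//192.168.1.10/photos  /mnt/nas  cifs  credentials=/etc/nas-credentials,ro,_netdev  0  0
```

Mounting read-only (`ro`) is a reasonable safety net for a backup-only box, and `_netdev` delays the mount until the network is up. Duplicati then simply points at `/mnt/nas` as a local source folder.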