I have seen discussions about a Duplicati server/client relationship here before, but this idea is somewhat different.
Sure, a centralized server would be nice, as it would be able to centrally manage configs and see backup status.
I think another possible benefit could be a built-in transport protocol server (SFTP, maybe) listening on a nonstandard port. So this server would have storage capabilities.
Finally, it could possibly offload transport and CPU tasks. For instance, the server could handle backup validation rather than the client downloading files and testing them itself. It could also help rebuild the client's database: the client should be able to ask "what do you have?" and the server could reply with the file list and a hash or whatever, limiting or eliminating the need to copy everything back to the client over the internet to verify what is and isn't backed up.
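To make that exchange concrete, here's a rough sketch of what the two sides might look like. Nothing here is real Duplicati code; the function names and the manifest layout are made up for illustration:

```python
# Hypothetical sketch of the "what do you have" exchange. The server
# enumerates and hashes its own files, so the client never downloads them.
import hashlib
import os

def build_remote_manifest(storage_dir: str) -> dict[str, dict]:
    """Server side: list every stored file with its size and SHA-256."""
    manifest = {}
    for name in os.listdir(storage_dir):
        path = os.path.join(storage_dir, name)
        if not os.path.isfile(path):
            continue
        sha = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                sha.update(chunk)
        manifest[name] = {"size": os.path.getsize(path),
                          "sha256": sha.hexdigest()}
    return manifest

def find_discrepancies(local_index: dict[str, dict],
                       remote_manifest: dict[str, dict]) -> dict[str, list]:
    """Client side: compare local records against the server's answer."""
    missing = [n for n in local_index if n not in remote_manifest]
    extra = [n for n in remote_manifest if n not in local_index]
    corrupt = [n for n, meta in local_index.items()
               if n in remote_manifest
               and remote_manifest[n]["sha256"] != meta["sha256"]]
    return {"missing": missing, "extra": extra, "corrupt": corrupt}
```

The point is that all the hashing happens next to the storage, and only the small manifest crosses the WAN.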
The problem is I have no idea how to build this, and I'm sure it would require a significant amount of time. But it may be worth it, I guess.
Storage Providers shows that Duplicati supports a wide variety. It asks very little of them beyond simple storage.
Backup companies that provide software and storage as a combination presumably use special server software to help them with things like record-keeping, backup-checking, easy restore to new drive, etc.
I believe CrashPlan Home used to have that for home users, but it’s discontinued. Other services exist.
Unless or until Duplicati (or some third party) sells bundled storage, users are left to provide their own…
For those who are willing to set up a server and maybe risk losing it to disaster, a solution mentioned is:
For business users, one trend (visible even in newer CrashPlan) is towards centralized management.
md5 & sha1 inclusion in verification json #2189 would probably speed up cloud file integrity verification; however, some cloud providers do their own periodic error scrubbing, and also have upload integrity checks.
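As a rough illustration of why those hashes would help: many providers report an MD5 per object (a single-part S3 ETag, for example), so the check can become a metadata comparison instead of a download. The JSON layout below is an assumption for the sketch, not Duplicati's actual verification format:

```python
# Sketch only: check provider-reported MD5s against a verification file,
# without downloading any backup data. The entry layout is assumed.
import json

def check_against_provider(verification_path: str,
                           provider_md5s: dict[str, str]) -> list[str]:
    """Return names whose provider-reported MD5 disagrees or is missing."""
    with open(verification_path) as f:
        entries = json.load(f)  # assumed: [{"Name": ..., "MD5": ...}, ...]
    bad = []
    for entry in entries:
        reported = provider_md5s.get(entry["Name"])
        if reported is None or reported.lower() != entry["MD5"].lower():
            bad.append(entry["Name"])
    return bad
```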
CrashPlan Home used to support peer-to-peer backups (the standard client was also a backup server). There has been at least one attempt (I forget its name) to bundle storage software to get near that idea.
So – lots of potential things here worth discussing (and there are other points that could be explored). Some of these are likely already feature requests here or in GitHub issues. I’ll let someone else look…
A standard problem for feature requests. There are always way more than can possibly ever be done…
To clarify: setting up a storage service isn't the core of my idea (though I think it would be needed to make my actual idea work). My main thought is the ability to have a PC/service/server running at a remote site that could do backup validation and, more importantly, aid in rebuilding the local PC's database. In my experience, Duplicati seems to download the entire backup (or large portions of it) in order to do a database recreate/rebuild. Over slow WAN connections this is less than ideal, and a server that could intermediate there and improve that would be amazing. And if you're going to do that, you are close to (if not already at) the point where it could run the storage software and possibly even be the central management console.
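For what it's worth, here's a hypothetical sketch of that rebuild-helper idea, assuming the helper runs next to the storage where reading every archive is cheap, while the client receives only the distilled index over the WAN. The names are invented, and a real implementation would parse Duplicati's dindex/dlist contents rather than zip member names:

```python
# Hypothetical rebuild helper. Reading every archive is a local-disk
# operation for the helper; only the small summary crosses the WAN.
import json
import os
import zipfile

def summarize_archives(storage_dir: str) -> bytes:
    """Server side: scan each zip archive and record its member names.

    Member names stand in for the real dindex/dlist parsing here."""
    index = {}
    for name in sorted(os.listdir(storage_dir)):
        if not name.endswith(".zip"):
            continue
        with zipfile.ZipFile(os.path.join(storage_dir, name)) as z:
            index[name] = z.namelist()
    return json.dumps(index).encode()  # tiny payload vs. the archives

def rebuild_local_index(summary: bytes) -> dict[str, list[str]]:
    """Client side: reconstruct a lookup table without downloading archives."""
    return json.loads(summary)
```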
I had some ideas for that a while back. The current Duplicati implementation is targeted at “home users”. If you are using a company-wide (or maybe just family-wide) installation, there are some tricks that can make it better.
One idea was to have a management server that would keep track of blocks. That way each backup would send the blocks to the server, and it would store them somewhere. When you need to restore, the management server knows where all the required blocks are located, and can grab them and return them. This removes the need for storing multiple blocks inside a zip archive, and can be made much more efficient.
It is not implemented like this in Duplicati, because it requires co-operation from the server, and that rules out storage targets like S3, Google Drive, and OneDrive.
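To show what that co-operation would involve, here's a toy version of the block-tracking server, with invented names and a single flat store; the put/get methods are exactly the kind of server-side logic that dumb storage targets don't offer:

```python
# Toy block-tracking server, for illustration only. One flat store;
# blocks are content-addressed by their SHA-256, so uploads deduplicate.
import hashlib
import os

class BlockServer:
    """Keeps one copy of each block and remembers where it lives."""

    def __init__(self, store_dir: str):
        self.store_dir = store_dir
        self.locations: dict[str, str] = {}  # block hash -> file path
        os.makedirs(store_dir, exist_ok=True)

    def put_block(self, data: bytes) -> str:
        """Store a block once; re-uploading an identical block is a no-op."""
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.locations:
            path = os.path.join(self.store_dir, digest)
            with open(path, "wb") as f:
                f.write(data)
            self.locations[digest] = path
        return digest

    def get_blocks(self, digests: list[str]) -> list[bytes]:
        """Restore path: the server already knows where every block is."""
        out = []
        for d in digests:
            with open(self.locations[d], "rb") as f:
                out.append(f.read())
        return out
```

Against plain S3, Google Drive, or OneDrive there is nowhere to run this logic, which is exactly why Duplicati packs blocks into zip volumes client-side instead.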