Correct me if I’m wrong, but I believe CP does a better job here in that it deduplicates across backup jobs, i.e. a file that’s already in the cloud will not be uploaded again, no matter what. Right?
So, a big feature I’ve seen people complain about losing with CrashPlan is its ‘backup to a friend’ system. The way it works is that you give out a code that your friend enters, and they can then use you as another destination to back their own files up to.
Now, Duplicati doesn’t have that insofar as it’s not as easy as entering a code, but that functionality can be accomplished by setting up the file server you back up to in such a way that your friends can use it over FTP or any of the other supported methods.
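To make the idea concrete, here is a sketch of what backing up to a friend’s server could look like from the command line. The host name, path, and credentials are all placeholders, not anything Duplicati provides; your friend would create the FTP account and share the details with you.

```shell
# Hypothetical example: use a friend's FTP server as the backup destination.
# "friend.example.com", the path, and the credentials are placeholders.
duplicati-cli backup \
  "ftp://friend.example.com/backups/my-machine?auth-username=me&auth-password=secret" \
  /home/me/Documents \
  --passphrase="my-encryption-passphrase"
```

Everything is encrypted with your own passphrase before upload, so the friend hosting the files can’t read them.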
Does this feature deserve a star or a plus?
Not more than a star, I’d say.
tophee (sorry, stupid auto-correct), I guess I always assumed CP de-duped only within a backup set (just like Duplicati does). So even if I had multiple sources going to a single destination in CP, they wouldn’t de-dupe across them (potentially due to different encryption keys).
As I understand it Duplicati does not re-upload files unless there is a change or as part of archive maintenance (such as history cleanup resulting in merging of multiple small archives).
I know somebody who works at Code42, but I don’t know that he’d be able to confirm our assumptions one way or the other.
Okay. Or wait: does CP have separate archives (and hence encryption keys) for each backup job? I thought it was one archive per client so that if I create multiple backup jobs on the same machine, de-duplication would work across backups.
tophee, no - you’re right. I believe there’s a single key for each CLIENT, not each job, and all jobs for a single client go to a single location and are de-duped as a set. I guess I was trying to say that if you’ve got multiple clients going to a single destination, they would each be de-duped individually.
As for Duplicati, I’m pretty sure each backup job is only de-duped against itself - so even if you have a single client with two different backup jobs going to the same destination (obviously either to separate folders or with a --prefix setting to distinguish the file sets) they will be de-duped separately.
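As a sketch, assuming the --prefix behavior described above, two jobs could share a single destination folder like this (the paths and prefix values are made up):

```shell
# Hypothetical example: two jobs sharing one destination folder,
# kept apart by --prefix (remote file names then start with "docs-"
# or "photos-" instead of the default "duplicati-").
duplicati-cli backup "ftp://nas/backups" /home/me/Documents --prefix=docs
duplicati-cli backup "ftp://nas/backups" /home/me/Photos --prefix=photos
```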
I guess another way to look at it is in your C:\Users\&lt;user&gt;\AppData\Roaming\Duplicati folder (assuming a Windows non-server install), de-duplication will only occur inside a single xxx.sqlite file. So if you make two backup jobs and end up with both xxxA.sqlite and xxxB.sqlite files, I’m pretty sure they’ll de-dupe independently.
Unfortunately, this is just my guess and neither of the “how it works” pages specify what happens in this scenario.
So I guess we’ll have to bug @kenkendk and ask - if a single client has multiple backup jobs that happen to include some of the same files, will de-duplication happen for each job individually (so a shared 100M file will be backed up twice, once for each job) or across all jobs on the client (so a shared 100M file will be backed up only once, no matter how many jobs point to it)?
Deduplication indeed doesn’t work across multiple backups. In theory it could be implemented, but this question has already been answered by @kenkendk:
I second the comment about a single source to multiple destinations.
Right now I am benchmarking several storage back ends and have eight (!) jobs backing up the same folder. It sure was a hassle to set up and maintain.
In the long term I expect to keep at least two, maybe three backends, so it’s still worth implementing.
Personally, all I “need” is sharing of Source settings across multiple backup sets on a single client. Everything else makes sense to me to vary by destination / outgoing IP.
5 posts were split to a new topic: Benchmarking different storage providers
I agree this is a missing feature. At least the ability to “link” or “clone” backup sets would be cool in the future. If you have specific folders or other complex backup rules, it can be a pain to recreate the job. Especially hard to remember if you are making changes later.
When I need to configure the same backup source for multiple destinations, I simply export the backup configuration and import it again, changing the “Backup Name” and “Destination” to new ones.
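A minimal sketch of that edit step, assuming the exported (unencrypted) JSON contains a "Backup" object with "Name" and "TargetURL" fields; the exact layout is an assumption here, so check it against your own export. The file names and the B2 destination below are made up:

```shell
# Stand-in for a real export (in Duplicati: Export > "As file", unencrypted).
cat > job-export.json <<'EOF'
{"Backup": {"Name": "Documents to NAS", "TargetURL": "ftp://nas/backups/docs"}}
EOF

# Rewrite the name and destination, then import the new file in the UI.
python3 - <<'PY'
import json
with open("job-export.json") as f:
    cfg = json.load(f)
cfg["Backup"]["Name"] = "Documents to B2"        # new job name
cfg["Backup"]["TargetURL"] = "b2://bucket/docs"  # new destination
with open("job-export-b2.json", "w") as f:
    json.dump(cfg, f, indent=2)
PY
```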
I am someone moving from CrashPlan. Duplicati seems great; I am still testing it.
I am missing “Backup only when host is reachable; retry backup that was missed due to connection issues”. It is supported by CrashPlan, but not by Duplicati. I do not know what this feature is called, but I described it here: Backup only when host is reachable; retry backup that was missed due to connection issues
Would it be within the scope of your list to have a line for (e.g.) “will still work after October, 2018”?
(FWIW, my CP will run out this October since I was a monthly payer, so glad I’ve found Duplicati and B2)
I guess it depends on what you mean by “work”.
Maybe you and DennissDD can put your heads together for some scripts like what he described here (at least until official support gets added).
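One possible building block for such a script, as a sketch: Duplicati’s --run-script-before option runs a script before the backup, and (as I understand the run-script convention) exit code 0 means “run the operation” while exit code 1 means “skip it without an error”. The host name below is a placeholder:

```shell
# Returns success (0) if the host answers a single ping within 2 seconds.
host_reachable() {
  ping -c 1 -W 2 "$1" >/dev/null 2>&1
}

# A --run-script-before script could then end with something like:
#   host_reachable backuphost && exit 0   # reachable: run the backup
#   exit 1                                # unreachable: skip without error
```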
Did you install Mono on your RPi, or are you using Docker?
I installed Mono on my Raspberry Pi using packages from Raspbian.
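For anyone following along, that install boils down to something like this on Raspbian (package names are the standard Debian/Raspbian ones; mono-complete is large, but avoids missing-assembly surprises compared to mono-runtime alone):

```shell
# Install Mono from the Raspbian repositories.
sudo apt-get update
sudo apt-get install -y mono-complete
```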
Hi Aragon!
I see. I am planning to build an RPi Docker image for Duplicati in a few days. I tried to install KeePass on my RPi (which needs Mono), and it messed up my system.
I would tend to agree with tophee that this is a “star” - not quite baked, but usable. As a new user I can’t add it to the list or I would, but I understand it’s a wiki now.
x Positive notifications (email if backups NOT run in a while)
There are now at least two ways to achieve this:
- https://www.duplicati-monitoring.com/ is a monitoring service that collects Duplicati’s backup reports, provides a nice dashboard and sends daily report e-mails. It is a service provided for free, nothing you need to install yourself. (Disclaimer: this service is developed by me).
- dupReport is a similar solution written by @handyguy in Python. This is a self-hosted open-source solution.
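For the duplicati-monitoring.com option, the service generates a per-job report URL that you pass to Duplicati via the --send-http-url advanced option. The URL below is a made-up placeholder, as are the destination and source paths:

```shell
# Hypothetical example: report each backup result to a monitoring URL.
duplicati-cli backup "b2://bucket/path" /home/me/Documents \
  --send-http-url="https://www.duplicati-monitoring.com/log/EXAMPLE"
```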
So @JonMikelV, maybe you could add some note to the comparison saying that there are solutions for Duplicati.