Duplicati 2 vs. Duplicacy 2

This is a “wiki” post for comparing Duplicati (DT) against Duplicacy (DC @ https://duplicacy.com/).

As a wiki ANY user can edit it to add/update features of either app that they may have more experience with than I do. Enjoy!

Feature comparison

DT DC Feature
x * Free (*DC command line free for individual users, else per user or computer fee)
x x Client side software
Server side software
x * Source code available (*DC source code for CLI only)
x x Encryption
x x Only back up changes
x x Block level de-duplication
x Lock-free deduplication (DC uses destination file names instead of a local database, resulting in lower local storage needs)
x Cross-source de-duplication (possible with DC thanks to lock-free deduplication)
x x Supports various cloud storage providers
x x Supports local storage
x x Supports SFTP servers as destination
x x Has command line
* x Has GUI
x x Mac OS X
x x Linux
x x Linux LVM snapshots supported (DT seems to have better support)
x x Windows
x x Windows VSS shadow copies supported (DT seems to have better support)
x x Can run as service/daemon
x x Supports service/daemon network share authentication (DT seems to handle it better)
x x Supports multiple retention policies (DT seems ‘better’, closer to CrashPlan-type options)
* x Supports multiple destinations (*DT requires a backup set for each destination, but multiple sets can all have the same source files)
x Concurrent backup (Multiple clients can back up to the same storage at the same time)
* Backup summary reports (*DT not built in, possible with dupReport)
x Variable block sizes (see the CLI sketch below this table)

x=has feature, *=has feature sort of, ?=MIGHT have feature
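
For context on the block-level de-duplication and “Variable block sizes” rows above, here is a rough sketch of how each tool exposes its block/chunk sizing on the command line. The paths, names and sizes are placeholders made up for illustration, and the flag names reflect my reading of each CLI rather than anything tested in this thread:

    # Duplicati: fixed-size blocks, configurable per backup job (values/paths are placeholders)
    Duplicati.CommandLine.exe backup "file:///mnt/backup/docs" /data/docs --blocksize=100KB --dblock-size=50MB --passphrase="..."

    # Duplicacy: variable-size chunks; the average chunk size is chosen when the
    # repository/storage is initialized (run inside the folder to be backed up)
    duplicacy init -c 4M my-docs /mnt/backup/duplicacy-storage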

Backup destination support comparison

DT DC Backend
X X Local file, network share (DT can authenticate to SMB share)
X FTP/FTPS (FTP, Alternative FTP)
X X SSH/SFTP
X X WebDAV
X X Dropbox
X X Amazon S3 and compatible (e.g. Wasabi, Minio)
X Amazon Cloud drive
X X Google Cloud Storage
X X Microsoft Azure
X X Backblaze B2
X X Google Drive
X X Microsoft OneDrive
X Microsoft OneDrive for Business (OD4B)
X Microsoft SharePoint
X X Hubic
X Box.com
X Rackspace CloudFiles
X Jottacloud
X mega.nz
X OpenStack Simple Storage
X TahoeLAFS
* * pCloud (via WebDAV)
* sia.tech (via blockchain) *experimental status

Backup source support comparison

DT DC Source
X X Local file, network share (DT can authenticate to SMB share)
SSH/SFTP (DC website inaccurately states support for SFTP as a source). Note that SSHFS can be used.
9 Likes

Nice summary, @JonMikelV :slight_smile:
You might also note that Duplicacy source code is available for the CLI tool only.
I would also suggest to mark the following when comparing backups:

  • Support for VSS on Windows and LVM snapshots on Linux - Duplicati has better support based on my testing (in Windows)
  • Support for running as a service/daemon - I haven’t checked the full Duplicacy package, but the CLI can be wrapped in a scheduler (see the sketch after this list)
  • Supports authentication to network shares while running as service/daemon - Duplicati wins here as far as I know.
  • Support for various retention policies - Duplicacy has better retention options, closer to what CrashPlan (CP) had.
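
To illustrate the “wrapped” approach from the second bullet, here is a minimal sketch of scheduling the Duplicacy CLI with cron; the repository path, binary location and log file are assumptions, not anything taken from this thread:

    # Hypothetical crontab entry: back up an already-initialized Duplicacy repository
    # every night at 02:00 and append the output to a log file (all paths are examples)
    0 2 * * * cd /home/user/documents && /usr/local/bin/duplicacy backup -stats >> "$HOME/duplicacy-backup.log" 2>&1

A systemd service/timer (or a Windows Task Scheduler entry) could wrap the CLI the same way for unattended runs.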

It is really hard to evaluate backup products - different people look for different features… I care less about user-mode backups, but really like the way Duplicati runs as a service and the flexibility of the CLI version…

1 Like

dgcom, thanks for the tips - on the bullet list items, are you saying both tools support all 4 items but you think Duplicati does them better?

I’m asking because I haven’t actually done anything with Duplicacy beyond reading their web site… :slight_smile:

I’ll see if I can get that first post shifted to a Wiki so others (such as yourself) can directly edit things rather than having to go through me.

Of the four items I listed, Duplicati is better at the first and third ones.
I just checked, and the UI version of Duplicacy can be installed as a service as well.

I just remembered one more feature of Duplicacy which is unfortunately not yet available in Duplicati - the ability to back up to multiple destinations. Again, I'm not sure if it works in the UI, but the CLI version can add another destination alongside the default one assigned to a specific source and then copy data from the default to that other destination.
Granted, it is not a simultaneous backup, but it's still a great option which avoids re-compression and re-encryption of the source.
On the other hand, I can simulate this today with rclone or a similar tool…
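
For anyone curious, the add-then-copy workflow described above looks roughly like this with the Duplicacy CLI (a sketch only; the storage name, snapshot ID and bucket URL are invented for illustration):

    # Inside an already-initialized Duplicacy repository:
    # register a second storage alongside the default one; -copy makes it copy-compatible
    duplicacy add -copy default offsite my-documents b2://my-offsite-bucket

    # back up to the default storage, then replicate the revisions to the second storage
    duplicacy backup
    duplicacy copy -from default -to offsite

This matches the point above: the second destination is fed from the first storage rather than from a second compression/encryption pass over the source files.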

1 Like

Yeah, I would love multiple destinations from a single source as well. And the way it sounds like Duplicacy does its block handling (with remote file names vs. a local DB) is actually pretty cool and fits PERFECTLY with multiple-destination features.

1 Like

BTW: A good way of showing your appreciation for a post is to like it: just press the :heart: button under the post.

In the Support category, if you asked the original question, you can also mark an answer as the accepted answer which solved your problem using the tick-box button you see under each reply.

All of this also helps the forum software distinguish interesting from less interesting posts when compiling summary emails.

Done. Hint: you can flag your post and ask for it to be turned into a wiki.

2 Likes

I find this one quite fascinating:

Concurrent backup
Multiple clients can back up to the same storage at the same time

Added it to the OP.

Oh, just realized that @dgcom also mentioned it above. I’m not sure I understand all of what you’re saying, though. Are you implying that there should be a * for Duplicati?

Well, in this case I’d say Duplicati should get an x since the Feature is that multiple CLIENTS can back up to the same destination at the same time - which I’m pretty sure Duplicati can do (unless the destination somehow restricts concurrent logins in some way).

If we were talking about a SINGLE client running multiple backups to the same destination at the same time (maybe you’ve got an hourly Documents backup and a daily Photos backup that both happen to fire at the same time), then I’d say Duplicati should stay empty.

As far as I can tell, a single backup job can be multi-threaded, but only one backup job (or tray-launched command-line task) runs at any one time. Note that I could be wrong here, as the tray/web UI doesn’t seem to be designed to show multiple tasks running at the same time.

Yes, that it can do. But my understanding is that duplicacy not only allows multiple backups to the same destination but to the same archive, including block level deduplication so that if a copy of the same file exists on multiple machines, it will only be uploaded once. That is pretty fascinating! Or am I misunderstanding something?

This is my source:

However, if concurrent access is required, an unreferenced chunk can’t be trivially removed, because of the possibility that a backup procedure in progress may reference the same chunk. The ongoing backup procedure, still unknown to the deletion procedure, may have already encountered that chunk during its file scanning phase, but decided not to upload the chunk again since it already exists in the file storage.

Based on that quote, and how Duplicacy appears to use the file names themselves to identify blocks, it would make sense that it “natively” supports cross-source de-duplication.

We know that at present Duplicati does NOT support that (though it has been mentioned before that it could), so perhaps a new Feature row for “Cross-source de-duplication” or “Cross-backup de-duplication” should be added, with blank for Duplicati and ? for Duplicacy, until somebody can verify our assumptions?
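
If the assumption holds, cross-source de-duplication with Duplicacy would look something like this (a sketch; the snapshot IDs, paths and bucket name are made up, and I haven't verified the behavior myself):

    # Machine A: initialize a repository against a shared storage and back it up
    cd /home/alice/documents
    duplicacy init laptop-a b2://shared-backup-bucket
    duplicacy backup

    # Machine B: same storage URL, but its own snapshot ID
    cd /home/bob/documents
    duplicacy init laptop-b b2://shared-backup-bucket
    duplicacy backup   # chunks already present in the storage shouldn't be uploaded again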

1 Like

As far as I understand, it is not recommended to have different clients and different jobs back up to the same location (folder), because it can cause conflicts…

1 Like

I don’t want to annoy anyone, but I think I found another little thing to bug people about:

If you refer to some other post on this forum, it would be great if you could provide a link to that post.

The link will automatically be visible from both sides (i.e. there will also be a link back from the post you’re linking to). Those links will make it much easier for people to navigate the forum and find relevant information.

1 Like

Hi,

I have been testing both with Backblaze B2 and am still trying to sort out which I will adopt. One big difference is speed/deduplication; they take two different approaches. Duplicacy relies on file size/timestamp differencing unless you specify the “-hash” option (which is not the default), while Duplicati chops files into much smaller pieces for more efficient data reduction. These differences make the two products hard to compare. Here are the results from a test uploading 3GB of actual data with both solutions.

Initial Backup
Duplicacy: Upload time ~45 minutes
Duplicati: Upload time 1 hour, 9 minutes.

Follow-on backups:
Duplicacy: ~30 seconds
Duplicati: ~2 minutes 40 seconds

In short, Duplicacy seems much faster, although Duplicati is storing about 200MB less data. To be 100% clear, I am running this on a small single-board computer running Ubuntu dedicated to this purpose, so compute power, memory and bandwidth are limited. (Your performance is likely much faster.)
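
For reference, the two Duplicacy change-detection modes mentioned above are just different invocations of the same command (a sketch, run from inside an already-initialized repository):

    # Default: decide which files to scan based on size/timestamp (fast follow-on runs)
    duplicacy backup

    # Re-hash every file's contents (slower, but catches changes that don't
    # alter size or timestamp)
    duplicacy backup -hash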

I really like the Duplicati GUI and there is no GUI for Duplicacy on Linux; however, I find the performance difference significant.

I will continue my tests and welcome any feedback.

3 Likes

Am I right to understand that the default behavior for Duplicati is not to rely on timestamps but to examine file contents on each backup? Is there an option to rely on timestamps only?

A bit of background - in my long-term plan, I have two data sets that I plan to back up:

Logging/Records data set (relatively small, 10MB, but frequently changing)
Archive data set (large, ~200GB, but infrequently changing; usually just new files, with edits to existing files extremely rare)

If there is an option to rely on timestamps, then I suppose the backup refresh of the latter would be much, much faster (and cause less wear on the hard disk, but that is super minor).

Take a look at --check-filetime-only; that might do what you want.
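
As a hedged example (the destination URL, source path and passphrase below are placeholders), it's just an advanced option added to the backup command:

    # With --check-filetime-only, Duplicati decides whether to scan a file for changes
    # from its timestamp alone - handy for a large, rarely-edited archive set
    Duplicati.CommandLine.exe backup "file:///mnt/backup/archive" /data/archive --passphrase="..." --check-filetime-only=true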

BTW, is there any documentation for the advanced options? If not, I guess I need to search the Google Group…

I'm not sure of the process for joining the team to help out with Duplicati development - but (depending on scope) I am happy to volunteer to maintain documentation on this (and other topics?). I am imagining a list of the options with some explanations, etc.

Any ideas on who to PM for this?

1 Like

Please run the following on the command line:

Duplicati.CommandLine.exe help advanced
1 Like

That would be @kenkendk. Or, more broadly @staff.

That said: I don’t even think you need to bug anyone about this (let alone ask for permission). You can simply create a topic in the How-To category and update it whenever appropriate. :slight_smile:

That would be fantastic help and much appreciated!

1 Like