Duplicati vs Duplicacy vs Kopia vs Vorta

Hi all,
basically I'm nearly ready to go with Duplicati as my backup software, but I was wondering if you have tested the other solutions as well and what made you stay with Duplicati? Or did you even see some benefits in the other software?

Price or yearly subscriptions are not important to me. I will do mostly “basic” backups to OneDrive and Google Drive. There will be one important backup set of about 100 GB and a second standard backup set of about 1 TB. Changes are not that frequent, so I don't care much whether a backup needs 5 min or 10 min. Restores will also only be needed if my local backup solutions fail or get destroyed, so in the worst case I don't care if the download takes 30 min or 1 hour. What is important to me is the safety of the backup, and that I can use it from a new PC or a fresh OS installation if my current PC dies.

Looking forward for your opinions.


Hello,

For one thing, Kopia is way faster:

CPU: FX-6330
Skyrim - 2 GiB:

duplicati (zip, default) - 3:18m - 928 MiB

(Kopia has different compression modes)
kopia-zstd - 55s - 878 MiB
kopia-zstd-better - 91s - 851 MiB
kopia-zstd-max - 367s (6m) - 800 MiB
pgzip - 29s - 955 MiB
pgzip-max - 148s - 915 MiB

Fallout 4 - 4 GiB:

duplicati (zip, default) - 6:42m - 1.79 GiB

kopia-zstd - 67s - 1.64 GiB
kopia-zstd-better - 121s - 1.61 GiB
kopia-zstd-speed - 53s - 1.73 GiB
kopia-zstd-max - 622s (10m) - 1.54 GiB
pgzip - 52s - 1.76 GiB
pgzip-speed - 46s - 1.91 GiB
s2-def-4 - 36s - 2.02 GiB
s2-def - 34s - 2.02 GiB

Restore times are similar, though (Kopia restores the Skyrim directory in about 2 min with zstd-max). Kopia doesn't have a native OneDrive client yet. The main benefit of Kopia, for me at least, is that all the configuration lives in the destination, so there is no worry about database corruption or losing configuration files. Kopia also scales very well with big backups. On the other hand, Kopia's approach to backup structure is quite different from Duplicati's: Kopia operates at the repository level (something like virtual drives) into which you put different source directories. Imho, Kopia requires reading the documentation, while Duplicati is very simple to set up and run.
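For reference, the compression modes compared above are chosen per source via Kopia's policy system. A minimal sketch, assuming a local filesystem repository; the paths are examples and the exact algorithm names should be checked against Kopia's docs (or `kopia benchmark compression`):

```shell
# Sketch: create a repository, pick a compression algorithm via policy,
# then take a snapshot. Paths are placeholders.
kopia repository create filesystem --path /mnt/backup/kopia-repo
kopia policy set /home/user/games --compression=zstd
kopia snapshot create /home/user/games
```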


Thanks for your input!

I also read a bit in the different forums. Currently I'm thinking that Duplicacy seems to be a more enhanced version of Duplicati. So far, I couldn't find any negative aspects of Duplicacy.


The negative aspect is the lack of a free web UI. The personal home license isn't expensive, though.


I tested it before settling on Duplicati. In general I like Duplicacy, but my main complaint is that you can really only have one source folder per backup job. (Not sure if this is still a design limitation or not.) The workaround at the time was to create a folder and then make symlinks to all the various folders you want to back up, but this seemed really clunky to me.
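The symlink workaround described above can be sketched in a couple of lines; the folder names here are examples, not anything from this thread:

```shell
# Sketch: one top-level "repository" folder whose entries are symlinks
# to the real source folders. Duplicacy would then be initialized inside
# ~/duplicacy-root and follow the first-level links.
mkdir -p "$HOME/duplicacy-root"
ln -sfn "$HOME/Documents" "$HOME/duplicacy-root/Documents"
ln -sfn "$HOME/Pictures"  "$HOME/duplicacy-root/Pictures"
```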

I also didn’t like that the entire project wasn’t open source, just the command line version.

On the other hand Duplicacy’s lock-free deduplication was pretty impressive. It also seems to have a more active development team compared to Duplicati today.


I think this has changed now. I will start making small tests with Duplicacy in the next few days, as I see it as the best solution currently on the market.

Kopia would be my #2; it looks very promising for the future if they can release a stable (non-beta) version. But even at this stage the concept is very good.

Duplicati is my #3. The possibility of database corruption scares me… Maybe it's not that big a deal, but I think the other solutions are more suitable for me.


Is backup corruption still an issue with Duplicati? I dropped Duplicati 5 years ago, but I'm trying it again now.


There are complaints about it on the forum. What the reason is, though, is hard to know, since most users complain and then drop out when it comes to investigating. Dropping out, as in disappearing without even a goodbye. So what's really the matter? Duplicati, as a true open source project, attracts low-resource users doing backups on consumer-grade hardware, subject to all the failures of that kind of hardware: no ECC RAM, no server-grade hard disks or SSDs, no UPS, NAS boxes cobbled together from (again) consumer hardware and configured by Linux beginners.
In years of complaints there has never been any kind of precise report from which to understand what the real problem could be. The main issue was that Duplicati is so slow to rebuild the database when something really nasty has happened on the backend (for whatever reason, good or bad) that nobody cared to wait. Maybe this could change in the next release, though; see this thread:

Things are better, but not perfect.

The question posted above replied to a database corruption concern by asking about backup corruption, which makes one wonder what the concern is; but actually they're sort of tied together, and both matter.

The database is probably here to stay, and in a sense it adds value because it can verify the destination. However, if they disagree, which one is correct? There's a bug fix in the queue for a case where the database got it wrong:

Fix missing file error caused by interrupted compact #4967

The database checks itself, producing messages like Detected non-empty blocksets with no associated blocks!, which I believe you were seeing. That issue is one that seems to be fixed now.

My opinion is that the bugs from 5 years ago, where Duplicati fell over by itself, are largely weeded out. Environmental situations such as power loss, or other interruptions in less-used operations, are slower to troubleshoot because there's usually not enough history, even if the end user is willing to work hard.

What would help is additional well-equipped test volunteers able to torture-test it while keeping adequate logs and other history, thereby giving developers a fighting chance to find the troubled code.

Compared to 5 years ago, https://usage-reporter.duplicati.com/ shows about twice as many backups, while the report rate of database or destination corruption is vastly lower; it's not zero, though, and likely never will be.

It's far better now, but when things go wrong they can still be a pain to diagnose and recover from. Possibly this is the fate of any backup program that uses complicated layouts that can get out of whack.


@ZebCorp Thanks for the info.
Last time I tried Kopia there was no way to send report emails, or at least not automatically. Do you know if that has been implemented yet?

Too complicated to implement a backup

@bluecat
You can also try Restic.
https://restic.net/

Open source and free. No web UI, but it works really well.

I am currently using Duplicati + Restic

I would like to be able to help more but I only have a notebook and my backups are small.

No, Kopia doesn't support that feature (e-mail reports) for now. It does support post/after actions, where you can do all sorts of things, though. I'm using/testing all three great backup solutions (Duplicati, Restic and Kopia) and I can't decide which is “better”; each has its own merits and limitations. Kopia and Restic are fast, Restic is more “script friendly”, Kopia is very convenient (the backup itself contains all the necessary data), and Duplicati has a very useful web UI (and works with Google Drive, where Kopia and Restic don't).
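The post/after actions mentioned here are configured through Kopia's policy system. A hedged sketch follows; the script path is a placeholder, the flag name should be checked against Kopia's actions documentation, and actions generally need to be enabled on the repository connection first:

```shell
# Sketch: run a notification script after each snapshot of this source.
# /usr/local/bin/notify-backup.sh is a hypothetical script that could,
# for example, send the report e-mail.
kopia policy set /home/user/docs \
  --after-snapshot-root-action /usr/local/bin/notify-backup.sh
```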


@ZebCorp Great info for me.

I am currently testing Duplicati and Restic in real operation.
Duplicati with Duplicati-Monitoring.
And Restic with pings to https://healthchecks.io/.
I suppose the same can be done with Kopia, that is, sending a ping with the backup report.

Or I could also send an email from a script. I have that part solved already; it would just be a matter of implementing it.
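Such a ping wrapper can be sketched in a few lines of shell; the healthchecks.io UUID, repository name, and source path below are placeholders, not anything from this thread:

```shell
#!/bin/sh
# Sketch: run the backup, then ping healthchecks.io. A plain ping on
# success, the /fail endpoint on failure.
HC_URL="https://hc-ping.com/your-uuid-here"   # placeholder UUID

if restic -r rclone:gdrive:restic-backups backup /home/user/docs; then
    curl -fsS -m 10 --retry 3 "$HC_URL" > /dev/null
else
    curl -fsS -m 10 --retry 3 "$HC_URL/fail" > /dev/null
fi
```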

“Kopia is very convenient (backup itself contains all necessary data)”

It's the same with Restic, I think. Right?

“(and working with Google Drive, where Kopia and Restic not).”

I'm using Restic with Google Drive without any problem. I configure rclone with storage option 18 (Google Drive), and before that I generate my own client ID. It's a bit cumbersome, but following the guide I can do it.
If you have any questions, I'll gladly help.
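As a sketch, the setup described above looks roughly like this; the remote name “gdrive” and the paths are examples, and rclone must be configured first with the Google Drive backend (ideally with your own client ID):

```shell
# One-time: create the repository through restic's rclone backend.
restic -r rclone:gdrive:restic-backups init
# Regular runs: back up, then list snapshots to verify.
restic -r rclone:gdrive:restic-backups backup /home/user/docs
restic -r rclone:gdrive:restic-backups snapshots
```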

:grinning:

They are close, but a Kopia repository contains everything about the backup (source data, rules, filters, etc.), while a Restic repository contains “only” the backup results; the rules and filters are not included (which is handy for scripting). As I understand it, for a fully self-contained Restic set, you have to back up its own backup rules, a sort of Duplicati database.

Imho, in the case of Google Drive, Duplicati is far more convenient than Restic with rclone or Kopia. But yeah, this is a valid option.


“As I understand it, for a fully self-contained Restic set, you have to back up its own backup rules, a sort of Duplicati database.”

Thanks for the info

“Imho, in the case of Google Drive, Duplicati is far more convenient than Restic with rclone or Kopia. But yeah, this is a valid option.”

Thank you very much again. At the moment I'm still using Duplicati and Restic. Both have their advantages and drawbacks. What holds me back a little with Duplicati is the database reconstruction time in case of total loss.
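For what it's worth, the total-loss path is a database recreate from the backend. A hedged sketch with the command-line client; the storage URL, passphrase, and database path are placeholders, and the exact behavior should be checked against Duplicati's CommandLine documentation:

```shell
# Sketch: if the local database is missing, "repair" re-creates it from
# the data at the destination. This recreate is the step that can be slow.
Duplicati.CommandLine.exe repair "onedrivev2://Backups/important" \
    --passphrase="..." --dbpath="/path/to/recreated.sqlite"
```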


Progress should be made on this problem in the next release.


Thank you very much for the good news. Actually, after years of using local backup programs, the first free and open source program that does everything I need is Duplicati. If that point is solved or improved, it will be great. :grinning: