Is Duplicati 2 ready for production?

Hello

I have two Windows users who need to back up their data over slow uplinks (one has about 10 Mbit/s, the other probably 1 Mbit/s), while the initial sync can be done locally. In the past I used Duplicati version 1 for this (with one initial backup and then only differential backups). Unfortunately, the restore didn’t work when I needed it, and some data was lost (at least I was able to recover some files using scripts and the various CLI tools from Duplicati 1). The target server is a Linux system; scp is my preferred protocol, but I could also set up FTPS.

Because of the disaster with Duplicati 1, I need a new solution that can handle my scenario.
How reliable is Duplicati 2? How easy is it to recover data if it fails in some way? If it isn’t ready for production, do you know of other software that fulfills my requirements (the ones I looked at so far did not)?

Greetings

Peter

Hi Peter,

Version 2 is still in beta, but it has been very reliable so far. I check the forums frequently, and as far as I know there has not been any data loss when Duplicati is configured correctly. I have used it for around a year in tandem with a production-ready backup product and have so far done all my restores with Duplicati. As long as you have access to the backup files, you’re pretty much guaranteed a restore.

1 Like

Hello, in my opinion Duplicati is pretty safe to use, but there are still some dangers, so do not rely on it as your only backup software.
Even if the data is eventually recovered through the recovery tool, it is a lengthy process.

For example, a reboot during a backup recently caused me big, unresolved problems.

1 Like

I run it on 10 computers and think it’s quite stable. But I would stay away from experimental or canary releases; just use “beta” releases for now (currently version 2.0.3.3).

1 Like

Duplicati 2 is very different from Duplicati 1 in its approach, as explained here and in other articles, so some old risks go away but some new ones emerge. There are quite a lot of self-checks, including sample verification just after a backup. If something serious is ever discovered, the difficulty of getting everything back to normal varies.

There are other things which aren’t strictly a production-readiness question but a limitation of the design, which relies heavily on a local database to keep track of what’s on the backend. Sometimes the two views get out of sync. Restores, though, don’t require the database, and there are several other levels of disaster recovery available.

What’s “production” anyway? What worries me the most is people who put files in Duplicati and then delete the originals. Duplicati is not meant to be that sort of archiver, and the risk of something going wrong grows over long periods.

1 Like

@mr-flibble Well, problems from a reboot during a backup don’t sound very user-friendly. I can’t rely on a user not shutting the computer down before the backup has finished, for example after copying a large amount of data onto it from a digital camera.

@drwtsn32 I wouldn’t want to run a canary release; the documentation is pretty clear about this.

@ts678 I understand the difference in the design of version 1 and 2. The problem I apparently ran into with Duplicati 1 was most probably a chain of incremental backups that broke somehow; I ended up with an incomplete set of data, and that was AFTER I had spent a lot of time on manual disaster recovery.

So I thought about the definition of “production”. In my case, I need a solution to back up two or three Windows clients to an scp server (some commercial software charges extra for scp or doesn’t support it at all). In my scenario, I would define the following points for production-ready software.
1: It needs to run reliably without user interaction and/or regular manual checks by the user on the client computer (if there is a real problem, the user should get an alert).
2: After an initial local sync, it needs to be able to make regular backups over a slow uplink without regular full backups (some of the software I evaluated can’t do this).
3: It needs to be able to restore all user data after the SSD/hard drive of a client computer stops working, with only the server credentials and encryption key at hand (even if the last full backup was years ago).
4: It needs to keep some history of backups. I don’t need to be able to restore every single day for the last decade, but again, if the initial backup was done 5 years ago, I still need to be able to make a full restore of last week’s data if the SSD stops working.

This still feels very subjective, but that’s what we’re left with given few hard statistics. See usage of roughly two million backups per month. You can scan the forum and issues for problems (and decide whether it looks less reliable than you accept for your production), and the releases to get an idea of where changes are going.

Duplicati should grow more reliable and have fewer rough edges as it matures (at some point the Beta label should disappear), but it’s useful to lots of people, and is far better than no backup (and two is even better).

Rough edges are easier to have opinions on, but as just a user I can only guess at futures from posts here.

From your list: scheduled backups can be done either at user login (the default) or as a service (rough edges).
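
If you go the service route on Windows, the service wrapper that ships with Duplicati is what registers it. As far as I know this is done from an elevated prompt in the install folder (treat the exact steps and path as assumptions to verify against the manual for your version):

```
Duplicati.WindowsService.exe install
```

After that, the registered service can be started from services.msc (or at the next boot), and scheduled jobs run even when nobody is logged in.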

A “real problem” is subjective, but no on-screen alert appears unless a Warning or Error happens, which causes a popup on the Duplicati page in the browser (and changes the color of the tray icon if you have one). Rough edges then include having to figure out which of several logs details the issue, then making sense of it. Recovery is sometimes easy, sometimes less so. Tools are imperfect. There seems to be some planned work here.
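
One thing that helps a bit with the “which log” hunt is writing a dedicated log file per job. A minimal sketch, assuming the 2.0.3.x option names (added as advanced options or on the command line; the path is a placeholder):

```
--log-file=C:\ProgramData\Duplicati\job1.log
--log-level=Warning
```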

Alerts by email, http (maybe to a monitoring service), etc. are quite configurable. Rough edge is consistency.
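
As a concrete sketch of the email route, these are the kinds of options a job can carry. The server, addresses and level values below are placeholders to adapt, not a recommended setup:

```
--send-mail-url=smtp://mail.example.com:587
--send-mail-username=backup@example.com
--send-mail-password=REPLACE_ME
--send-mail-from=backup@example.com
--send-mail-to=admin@example.com
--send-mail-level=Warning,Error,Fatal
```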

Slow uplink is easy because only changes are uploaded unless a compact is permitted (it grows busier then).
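
If even an occasional compact is unwelcome on a 1 Mbit/s uplink, it can be disabled or made rarer. A sketch with option names to double-check against the built-in help: the first line turns automatic compacting off entirely, while the second (as an alternative) raises the wasted-space percentage that triggers it, so it runs less often:

```
--no-auto-compact=true
--threshold=50
```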

Total drive failure should not be a problem, but depending on the backup size (including versions kept) it might interact with Internet and other speeds to make getting everything running normally slower than you’d prefer. A restore is usually a point-in-time view, so “last week’s data” may include files that have been there a while. Old files might be more subject to undetected destination damage. BTW, scp is not available yet, but you could ask. --backup-test-samples or the TEST command can be used to test health without actually doing a big restore.
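
For reference, an on-demand health check from the command line looks roughly like this. The ssh:// (SFTP) URL and passphrase are placeholders, “all” asks it to verify every remote volume instead of a small sample, and --full-remote-verification (as I understand it) checks the contents inside the volumes rather than just their hashes:

```
Duplicati.CommandLine.exe test "ssh://backupserver.example.com/backups/pc1" all --passphrase=REPLACE_ME --full-remote-verification=true
```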

If you’re not on a metered connection, maybe you can do your own testing to simulate the recovery you seek.

I would personally say NO, because I’ve been doing exactly that, deploying the software at full scale, and every now and then there are still issues which absolutely drive me crazy.

  1. Invalid file sizes
  2. Invalid file hashes
  3. Manual fixing of environments, deleting files, running repairs and so on, which can be really slow
  4. Complete backup set recreations due to more serious issues
  5. Ultra-slow restores (not always!)
  6. Corrupted backups -> restore 100% failed and loss of confidence in the software’s reliability and usability
  7. Sudden situations where the backup run time is 20x what’s expected, without any visible reason (like compaction) or disk/network congestion
  8. The program blows up and refuses to run after an update

Those are things which are an absolute no-no for any production use. I’m wishing to see fixes for the issues reported.

Apart from the issues I’ve listed, I’m super happy, because the features which Duplicati 2 provides are totally awesome.

Have you done restore tests? Otherwise you don’t even know if it’s working, and that’s one of my worst worries. I’ll try to run full restore tests monthly, but even then there could be some broken sets, which would be a real disaster.

Always a good idea! If you haven’t seen it already, this might be useful:

In my opinion, identifying a broken set during a test is a good thing as it gives me a real view into what I can restore.

Plus, it lets me look into the issue when I’m not in crisis mode trying to restore data. Depending on the ultimate cause of the issue, I then have the opportunity to reevaluate all parts of the backup: sources, connection, destination, and yes, Duplicati itself.

1 Like

Hello
Today I use another backup solution, Acronis, on 15 clients in parallel.

I believe that in one more year Duplicati will be stable.

I think the biggest problem is in the SQLite database, but I really believe in the developers.

A little investment is needed on our part; I will donate $30 a month.

If a good number of us did this, the product would be top notch; there is no perfect product without investment.

Since it is open source, I can only thank the developers.

Anderson

2 Likes

AFAIK, test requires the local database. I want to do an actual, real-world, authentic restore test where data is restored from the backup location, not just checked against the local database. As far as I know, the test won’t run without the local database, which of course isn’t available if you need to do a full restore.

Did I get something wrong? If so, it would be great. But I doubt it.

Although it takes disk space, a restore with --no-local-db, or from the web UI’s “Direct restore from backup files”, would be a good test of disaster recovery. The TEST command unfortunately doesn’t accept --no-local-db.
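
To make that concrete, a database-less disaster-recovery restore from the command line looks roughly like this (URL, target folder and passphrase are placeholders):

```
Duplicati.CommandLine.exe restore "ssh://backupserver.example.com/backups/pc1" "*" --no-local-db=true --restore-path=D:\restore-test --passphrase=REPLACE_ME
```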

1 Like

Yep, yet the help text for test doesn’t mention that requirement as clearly as the help text for some commands, like affected, does. That’s why I’ve automated the full restore process. Anyway, it’s good to do a full restore, because it also checks the final restored files for validity and so on. The cost of disk space is absolutely negligible in this case.

The automated restore test restores all backups in succession, runs additional validity/corruption checks on the backed-up databases along the way, and then deletes everything and tests the next data batch. This is important, because even if the backup software is working perfectly, some other failure could leave a database technically corrupted.
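
For anyone who wants to copy the idea, here is a minimal sketch of such a loop in bash. This is my own illustration, not the actual script; the duplicati-cli wrapper name, URLs and paths are assumptions for a Linux test box:

```
#!/usr/bin/env bash
# Minimal automated restore-test loop (illustrative sketch only).
# Assumes the Linux "duplicati-cli" wrapper; use "mono Duplicati.CommandLine.exe" if that fits better.
set -euo pipefail

RESTORE_DIR=/srv/restore-test
PASSPHRASE_FILE=/root/.duplicati-passphrase    # placeholder secret location

# Placeholder backup locations; one entry per backup set to exercise.
BACKUPS=(
  "ssh://backupserver.example.com/backups/pc1"
  "ssh://backupserver.example.com/backups/pc2"
)

for url in "${BACKUPS[@]}"; do
  echo "=== Restore test for $url ==="
  rm -rf "$RESTORE_DIR" && mkdir -p "$RESTORE_DIR"

  # Full restore without a local database, i.e. the real disaster-recovery path.
  duplicati-cli restore "$url" "*" \
    --no-local-db=true \
    --restore-path="$RESTORE_DIR" \
    --passphrase="$(cat "$PASSPHRASE_FILE")"

  # Hook for extra validity checks on the restored data (checksums, opening dumps, etc.).
  echo "Restored $(find "$RESTORE_DIR" -type f | wc -l) files from $url"

  rm -rf "$RESTORE_DIR"   # free the space before the next batch
done
```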

I would love to be able to run the test without having the local database. Of course I could first rebuild the local database and then run the test, but I don’t know if that’s worth it. A full restore does the rebuild anyway, and as mentioned, it’s a way better test than Duplicati’s internal test.

It’s especially way better than the internal test if the backup settings don’t raise --backup-test-samples above the default of 1. “Should backup-test-samples be changed to percentage of backup set?” suggests a way to scale that.
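
Until something like that lands, the knob itself can at least be raised per job; a hedged example with an arbitrary value:

```
--backup-test-samples=5
```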

Well, one way could be testing all fresh files plus some old ones at random.

Anyway, what then if the test shows that something is wrong? There’s pretty much nothing you can do about it, because Duplicati still fails with the latest canary. All repair options are more or less broken: Test, Repair, List-Broken-Files and Purge-Broken-Files. Test fails, but there’s no way to remedy the situation other than deleting everything and starting the backup set from a clean slate. Restore keeps failing, even though the backup run deceptively reports that it’s all good. The program logic is still very seriously lacking, and restore fails.
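
For reference, the repair chain being dismissed there is the one invoked roughly like this from the command line (the URL is a placeholder; this is just to show which commands are meant, not a claim that they will fix anything):

```
Duplicati.CommandLine.exe repair "ssh://backupserver.example.com/backups/pc1" --passphrase=REPLACE_ME
Duplicati.CommandLine.exe list-broken-files "ssh://backupserver.example.com/backups/pc1" --passphrase=REPLACE_ME
Duplicati.CommandLine.exe purge-broken-files "ssh://backupserver.example.com/backups/pc1" --passphrase=REPLACE_ME
```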

Serious things like this are the exact reason why the software shouldn’t be used in production. It doesn’t work reliably or sanely from a logic/integrity point of view, which makes it very dangerous and deceptive to use. All the effort and resources spent creating backups that can’t be restored are literally wasted.

I’m forced to completely recreate a 264 GB backup set, which amounted to over 100 GB of Duplicati files, because it is unrecoverable due to bad software logic. Probably some very small operations would have been enough to fix the backup set, but that logic is missing from the software. I could have tried manually deleting more files, but the outcome would probably have been uncertain. These are exactly the things which shouldn’t have to be handled manually.

1 Like

Oh boy. All of the complex issues I read about here have me second-guessing using Duplicati for my home DR solution. I want to set it and forget it until I need it, and I don’t want to have to deal with all the esoteric problems that many here have with it. With my luck, when I need to do a restore it won’t work. Maybe I’ll go with Cloudberry for now until more bugs are worked out and the product continues to mature.

1 Like

I have had it running for over a year without issues. I do a test restore from time to time, and all goes perfectly fine. Running the beta.

Could you share your options configuration?

I haven’t tried Cloudberry myself but haven’t heard anything bad about it…

Keep in mind that almost all posts here are focused on fixing an issue, so you don’t see anything about the thousands of backups a day where it all works.

Generally, Duplicati either works well and keeps doing so, or it is troublesome from the start and remains so. We still haven’t figured out what triggers one outcome vs. the other. :frowning:

Of course there’s nothing that says you can’t run two different backup solutions at the same time! :wink: We just want you to have SOMETHING doing backups because some (many?) of us know how painful data loss can be.