Is Duplicati 2 ready for production?

What steps are you following in your DR plan? And where are you getting stuck? Also what size is your backup set?

Maybe I can offer some pointers. I have done a DR test myself and it worked well for me but I may have taken extra steps.

The simple answer is still absolutely not.

Version: 2.0.4.22_canary_2019-06-30

It is still extremely, dangerously broken, and this is what completely ruins Duplicati: backup restore fails even when all tests pass perfectly. This is the ultimate trap. Unsuspecting users and administrators think their backups are ok, and only find out otherwise when there's a need to restore data and, bang, it won't work.

It's like having insurance that never pays out.

But this was already discussed here:

Just a reminder that the situation is the same, and nothing has changed since.

TL;DR: NO

I have been using Duplicati for several months and have observed that it often breaks for no apparent reason.

I run daily backups, and the monitoring tool duplicati-monitoring.com tells me that every 2 or 3 days a backup finishes with a warning; the next day it is fine, and then the cycle starts over.

I have often found jobs in an unusable state: they could neither add files nor restore them. Others had missing dlist files or a broken database, and the backup was essentially lost.

Sometimes the backup duration is 10x or 20x what you would expect, without any obvious reason (CPU, memory and bandwidth all look fine).

It becomes VERY unreliable when backing up more than 1TB per job (at least in my experience), and it takes ages to start such a job because the local database grows quickly.

I still use it because it's free and I could not find a better alternative. Sometimes everything is great, sometimes you want to flush it down the toilet.


I believe some of the unreliability can be attributed to the back end being used. Some seem to be inherently more reliable (S3, B2, etc) than others (OneDrive) for this type of application.

Setting that aside, there are still some bugs to be worked out. I personally have experienced the "unexpected difference in fileset" issue a couple times as well as "found inconsistency while validating" recently. I think they may be caused by bugs in the compaction or fileset pruning processes.

I also wish the database recreation process were more reliable. On my systems it still seems to need at least some dblock files, whereas I believe it should only need to read the dlist and dindex files. The fact that it has to download dblock files seems to be an indicator of some underlying bug.

Perhaps that is due to an auto compaction event which also may trigger a database vacuum operation, or something similar.
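To illustrate why needing dblock files during a recreate looks like a bug, here is a simplified sketch of the recreate logic as I understand it (an assumed model, not Duplicati's actual code): dlist files name the block hashes each backed-up file needs, dindex files say which dblock volume holds each hash, and a recreate should only have to download dblocks when some hash has no dindex entry.

```python
# Simplified sketch (assumed model, not Duplicati source): why a database
# recreate that touches dblock files suggests broken or missing dindex data.

def recreate_database(dlists, dindexes):
    # dlists: iterable of filesets, each mapping file path -> list of block hashes
    # dindexes: iterable of (dblock_volume_name, set_of_block_hashes_it_contains)
    needed = {h for fileset in dlists for hashes in fileset.values() for h in hashes}
    located = {h: vol for vol, hashes in dindexes for h in hashes}

    missing = needed - located.keys()
    if missing:
        # Only at this point would the real recreate have to download and scan
        # dblock volumes, searching for blocks the index files failed to describe.
        print(f"{len(missing)} block hashes have no dindex entry; dblock scan required")
    return located
```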

Yes, the back end I am using is not super reliable. Nevertheless, Duplicati should also handle such situations and actually verify that files have been uploaded to the backend, or use smarter retry logic.
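For what it's worth, the "verify after upload, then retry" behaviour being asked for could look roughly like this sketch; the backend interface, the size check, and the backoff policy are all invented for illustration and are not Duplicati's actual code:

```python
# Hypothetical upload-with-verification sketch; 'backend' is an invented object
# assumed to provide put(name, data), get_size(name), and delete(name).
import time

def upload_verified(backend, name: str, data: bytes, retries: int = 3) -> None:
    expected_size = len(data)
    for attempt in range(1, retries + 1):
        backend.put(name, data)
        # Verify the remote copy really landed with the expected size before
        # recording the volume as uploaded. A stronger check could re-download
        # the object and compare a SHA-256 hash of the content.
        if backend.get_size(name) == expected_size:
            return
        backend.delete(name)                 # remove the bad copy
        time.sleep(2 ** attempt)             # simple exponential backoff
    raise IOError(f"upload of {name} could not be verified after {retries} attempts")
```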


Has anyone used Duplicacy? I just started using it on one of my Linux servers to back up ~3TB of data. I really like its lock-free deduplication feature. Backup & restore work fine with some basic testing, but I'm curious about the experience of people who have used both Duplicati & Duplicacy longer and in more complex situations.

Yes, I have used it. The lock-free dedupe is probably its killer feature. Awesome design! Personally I don't like how each backup set only protects a single source folder; that doesn't work well for my workflow. The workaround is to use symlinks, but that seems like a kludge to me.

Why is that a killer feature? It suits only very specific operating environments, and every design decision involves trade-offs. I didn't find anything amazing about the lock-free approach. It works well when you have a set of systems that mostly share the same data and the same encryption keys, but in most cases I've seen, that is rare: shared data usually doesn't need to be backed up, and private data that isn't shared can't naturally be backed up with shared encryption keys.

It would naturally be possible to use content-specific encryption keys, as described, but that would of course require managing the content lists in a way where information about non-shared data can't leak. I guess it's possible to transfer some of the keys from system to system to enable this kind of access. The master list of data would then need to be encrypted separately, so that the chunks themselves can use shared, content-specific keys. That would allow you to restore data only via the master list, even if you have access to all the chunks. But then you wouldn't know which blocks are referenced and which ones aren't, because you can't have access to the master list(s).
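As a rough illustration of the "content-specific encryption keys" idea, here is a generic convergent-encryption sketch (not how Duplicacy or Duplicati actually derive keys): if each chunk's key is derived from the chunk's own content, systems that hold the same data derive the same key and the same chunk ID, so they can share stored chunks without sharing one global key, while the master list mapping files to chunk IDs stays under each system's private key.

```python
# Generic convergent-encryption sketch (illustrative only, not Duplicati/Duplicacy code).
# Requires the third-party 'cryptography' package.
import base64
import hashlib
from cryptography.fernet import Fernet

def chunk_key(chunk: bytes) -> bytes:
    # The key depends only on the chunk content, so anyone holding the same
    # plaintext can decrypt the shared stored copy.
    return base64.urlsafe_b64encode(hashlib.sha256(chunk).digest())

def store_chunk(chunk: bytes) -> tuple[str, bytes]:
    # Domain-separated hash for the public ID, so the ID does not reveal the key.
    chunk_id = hashlib.sha256(b"id:" + chunk).hexdigest()
    ciphertext = Fernet(chunk_key(chunk)).encrypt(chunk)
    return chunk_id, ciphertext
```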

Another drawback also seems obvious. When every chunk is in its own file, it can lead to situations where there are absolutely staggering numbers of files. Once again, depending on the situation, protocols and platforms, this might be a problem (overhead, malfunctions) or not.

Every application is designed for some use case, and use cases differ widely. Renaming files or moving them to another directory is also something that some cloud storage services do not support; still, adding an extra zero-byte marker file to indicate that a chunk has been fossilized works as well.
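To make that last point concrete, here is a hypothetical sketch of the zero-byte-marker workaround for backends that cannot rename objects; the storage interface and naming are invented for illustration and are not Duplicacy's actual layout:

```python
# Hypothetical sketch of "fossilizing" a chunk on a backend with no rename support.
# Instead of renaming chunks/<id> to fossils/<id>, we upload an empty companion
# object whose presence marks the chunk as a fossil awaiting garbage collection.
# 'storage' is assumed to expose upload(path, fileobj), exists(path), delete(path).
import io

def fossilize(storage, chunk_id: str) -> None:
    storage.upload(f"chunks/{chunk_id}.fossil", io.BytesIO(b""))

def is_fossil(storage, chunk_id: str) -> bool:
    return storage.exists(f"chunks/{chunk_id}.fossil")

def resurrect(storage, chunk_id: str) -> None:
    # A later backup that still references the chunk removes the marker.
    storage.delete(f"chunks/{chunk_id}.fossil")
```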

The "killer feature" term might have originally been coined by the Duplicacy developer in their forum:

Duplicacy vs Duplicati

pointing back to the Duplicati forum:

Duplicati 2 vs. Duplicacy 2

Both articles are a little dated now. Duplicacy has since moved to a web-based UI like the one Duplicati uses, but it is licensed and costs extra. The good/bad thing about that is it might allow the original Duplicacy developer to remain actively involved, whereas Duplicati might be in some transition mode.

Duplicati probably still wins on feature quantity, but has trouble with scaling and stability (i.e. it's beta). Possibly its web UI is still better than Duplicacy's new paid one (which I have not tested). Trade-offs… There have been Duplicati users leaving for Duplicacy. You can search this forum and theirs for notes.

By "killer feature" I mean something unique to Duplicacy - their stand-out differentiator.

(Enterprise backups have global dedupe as well but Duplicacy and Duplicati and others are not targeting that market.)

I agree there are downsides to lock-free dedupe… Didn't mean to imply there are not.

I did just run a massive batch of restore jobs to verify backup integrity, and there were some errors but not a single complete restore failure. That could be seen as huge progress compared to earlier tests, which almost always included some totally non-restorable backups. If anyone is really interested, I can give a bit more detail in private, but this is all I can say publicly.


Repair fails; this seems to be a years-old issue.

Quick summary of the periodic test:
~83% of backups good: worked as they are supposed to, and quickly.
~11% of backups broken but still restorable, with a very slow recovery step involved.
~6% of backups damaged beyond repair, impossible to restore.

This is a nightmarish result. Something is still inherently, extremely broken, and the software is very dangerous to use. I guess it would be a good idea to switch to different software.

Can you elaborate? Is this a database recreation step that is taking a long time?

While the software certainly has some room for improvement, your experience doesn't really jibe with mine personally. I have very few issues, if any at all, across my 18 backup sets.

I know you had some corruption on the back end, and Duplicati can't really deal with that as it doesn't include parity in the back end files. You could mitigate that risk by using more durable storage.

Sure I can. When I run restore tests I log everything. When a restore returns code 2 it is successful, but during the restore it had to read all the dblock files from the directory while trying to recover missing dblocks.

But ending up in this situation means that something is already seriously wrong; we were just lucky that the recovery was successful. It could have been worse if those blocks had not been available from other files. As far as I understand, that's the situation.

I can edit this message tomorrow and add the exact message and the key parts of the log from one of the failures, to be 100% clear. I don't have it at hand right now.

The totally failing restores end up with code 100. I'll drop a few log snippets of those here as well.

By the way, those initial results were only from "small" backup sets, because I saved the large ones for later. With the large ones I assume the failure rate will be even higher; let's just say that, given the probabilities, the execution times, and the amount of bytes transferred and stored, I basically know it is going to be worse.

But I'll know that in a week, or two at most. These backup sets are measured in terabytes instead of tens of gigabytes.

But just as a general reminder: always, always test your backups regularly, with a full recovery. Otherwise, when you need your backups, it is highly likely there will be nothing to restore.
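As an example of the kind of regular restore test being described, a wrapper like the rough sketch below could run a full restore and compare hashes. The exit-code meanings (0 = clean, 2 = recovered with warnings, 100 = failure) are taken from the results reported in this thread, and the CLI flags are the ones I believe exist, so treat both as assumptions to verify against your own logs and the current documentation.

```python
# Hypothetical restore-test wrapper around the Duplicati command line.
# Exit-code interpretation follows the observations in this thread (0 clean,
# 2 recovered with warnings, other values such as 100 unrestorable).
import hashlib
import pathlib
import subprocess

def sha256(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def test_restore(target_url: str, restore_dir: str, source_dir: str) -> str:
    result = subprocess.run(
        ["duplicati-cli", "restore", target_url, "*",
         f"--restore-path={restore_dir}", "--no-local-db=true"],
        capture_output=True, text=True,
    )
    status = {0: "clean", 2: "recovered-with-warnings"}.get(result.returncode, "failed")

    if status != "failed":
        # Compare every restored file against the live source by hash.
        # This assumes the source has not changed since the backup ran.
        for src in pathlib.Path(source_dir).rglob("*"):
            if src.is_file():
                restored = pathlib.Path(restore_dir) / src.relative_to(source_dir)
                if not restored.exists() or sha256(restored) != sha256(src):
                    status = "hash-mismatch"
                    break
    return status
```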

One other thing I am going to do is install the latest version on all source machines, because that could make a meaningful difference.

If there is something good in all this, it is that the error reporting for full restores works. It would be worse if it said OK when everything was not actually OK. The final hash verification is also good: when Duplicati says a backup was successfully restored, the restored data has never turned out to be broken.

I suspect you are doing a "direct restore from backup files"? (Which of course is the best type of test for a DR situation, since it does not rely on the local database.) When you do this, Duplicati does have to build a temporary database, and my guess is you are being hit by a bug where bad dindex files force dblocks to be read in order to create that temp database. The same issue occurs when you aren't doing a restore test but are instead just recreating the local database.

There is a fix for this (at least in my experience): a way to regenerate those "bad" dindex files. But it requires a functioning database first. If you have the time and inclination, it may be an interesting test. After the dindex files are regenerated, I am betting a direct restore from backup files will work much more quickly (at least the temp database creation phase).
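In case it helps anyone reading along, the procedure being hinted at might look roughly like the sketch below. It assumes the stock repair command will re-create index files it finds missing from an intact local database, so please verify that against current documentation before trying it on real backups; the URL and database path are made up.

```python
# Rough sketch of regenerating suspect dindex files (assumption: "repair" will
# re-upload index files that are missing remotely, using the intact local DB).
import subprocess

TARGET_URL = "b2://my-bucket/duplicati"                  # hypothetical backend URL
DBPATH = "/home/user/.config/Duplicati/ABCDEFGH.sqlite"  # hypothetical local DB path

# Step 1 (manual, outside this script): move or delete the suspect
# duplicati-i*.dindex.zip.aes files on the remote storage so they count as missing.

# Step 2: run repair so the missing index files are rebuilt from the local database.
subprocess.run(
    ["duplicati-cli", "repair", TARGET_URL, f"--dbpath={DBPATH}"],
    check=True,
)
```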

I guess this is the wrong thread for this discussion, but these are age-old problems with Duplicati that are still unresolved.

Slow recovery, code: 2

Remote file referenced as duplicati-bf9fb2b38282d40bab8b6c31ffa1a685b.dblock.zip.aes by duplicati-i8c60e298171e421094efc4f16460ebcd.dindex.zip.aes, but not found in list, registering a missing remote file
Found 1 missing volumes; attempting to replace blocks from existing volumes

Totally fubared, code: 100

ErrorID: DatabaseIsBrokenConsiderPurge
Recreated database has missing blocks and 8 broken filelists. Consider using "list-broken-files" and "purge-broken-files" to purge broken data from the remote store and the database.

I'm sure there are better threads for this topic… so let's not continue here…

Anyway, when backups are systematically and repeatedly corrupted, it's not a good sign for any program, and especially not for a backup program.

Edit: Linked to this thread: Backup valid, but still unrestorable? The issue persists after three years.

I'll confirm one sample case, and update that thread accordingly.

Hi to all!

A lot of time has passed since this thread was started.
I would like to ask for your brief opinions:

  1. Is Duplicati stable and trustworthy for you after all this time?
  2. Are there still any mysterious problems, the most terrifying being "Duplicati claims that everything is fine, and then it suddenly turns out that the backup is damaged"?
  3. Has anyone switched from Duplicati to another solution?

I know it still has beta status, but some posts sound as if Duplicati were in a pre-pre-alpha phase :frowning:

Personally, I need to use it on several Ubuntu machines, and I wonder whether it works better or worse in the Mono environment than on Windows.

The entire Duplicati project seems to have a great idea, a brilliant design basis (the whitepaper), fantastic functions… but not enough programming power :frowning: (I am not talking about the competence of the creators, only about the number of programmers).
I'm sorry, but that is my impression, and I hope I'm wrong.

Your opinions are important to me because I need a production solution. Currently I am looking for a solution for Ubuntu that is open source + has a GUI + deduplication + incremental backups + encryption.
I am considering Duplicati, Vorta (a GUI for Borg), or kopia.io.

Cheers!

Hello and welcome to the forum!

Many people use it and find it reliable, but some people do have issues. (The posts on this forum are of course by people looking for support with issues. You don't really see people post very often that DON'T have issues.)

There are also pending known issues that still need fixing of course. Whether or not those affect you depends on your use case. For me the pending issues arenā€™t a problem.

You're not wrong, it seems like we don't have many people actively working on development. Volunteers are always welcome.

There are a lot of options available. I suggest you try several out and compare. That's what I did when I stopped using my previous backup solution (CrashPlan).

Good luck on your journey!

Look at the dates on the posts in this topic. Personally, I had problems before 2.0.5.1_beta_2020-01-18, but debug/fix efforts prevailed on many of them. Some remain unfixed, maybe due to the developer shortage. Others are rare, and poorly understood because they're rare. Reproducible test cases can help immensely.

Good practices for well-maintained backups has things you can do to attempt to keep things happy and fast.

For a sense of scale, https://usage-reporter.duplicati.com/ shows 4 million backups/month. Not all post. :wink:

Keep multiple backups of critical data, maybe by different programs. Reliability varies but none are perfect.
If you'd consider a licensed closed-source GUI on top of an open-source engine, some folks like Duplicacy. Comparisons are hard due to different user base sizes and uses, so personal evaluations become helpful.
