Is Duplicati 2 ready for production?

Repair fails; it seems to be a years-old issue.

Quick summary for the test period:
~83% of backups were good: worked as intended, and quickly.
~11% of backups were broken but still restorable, with a very slow recovery step involved.
~6% of backups were damaged beyond repair, unable to restore.

This is a nightmarish result. Something is still inherently, severely broken, and the software is very dangerous to use. I guess it would be a good idea to change the software being used.

Can you elaborate? Is this a database recreation step that is taking a long time?

While the software certainly has some room for improvement, your experience doesn’t really match mine. I have very few issues, if any at all, across my 18 backup sets.

I know you had some corruption on the back end, and Duplicati can’t really deal with that, as it doesn’t include parity data in the back-end files. You could mitigate that risk by using more durable storage.

Sure I can. When I run restore tests, I log everything. When a restore returns code 2, it’s successful, but during the restore it had to read all dblock files from the directory, trying to recover missing blocks.

But ending up in this situation means that something is already seriously wrong; we were just lucky that the recovery was successful. It could have been worse if those blocks hadn’t been available from other files. As far as I understand, that’s the situation.

I can edit this message tomorrow and add the exact message and the key parts of the log from one of the failures, to be 100% clear. I don’t have it at hand right now.

And the totally failed restores end up with code 100. I’ll drop a few log snippets here as well.

Btw, those initial results were only from the “small” backup sets, because I’ve saved the large ones for later. With the large ones, I assume the failure rate will be even higher; I’ll share the probabilities, execution times, and the amounts of bytes transferred and stored. So I basically know it’s going to be worse.

But I’ll know that in a week or maybe two tops. These backup sets are measured in terabytes instead of tens of gigabytes.

But just as a generic reminder: always, always test your backups regularly, with full recovery. Otherwise, when you need your backups, it’s highly likely there will be nothing to restore.
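A minimal sketch of what such an automated restore test could look like, based only on the exit codes mentioned in this thread (0 = clean success, 2 = restored but with warnings, 100 = failed); the actual restore command line is an assumption and must be adapted to your own Duplicati installation:

```python
import subprocess

# Exit codes as reported in this thread; other codes may exist,
# so anything unexpected is treated as a failure here.
def classify_exit_code(code: int) -> str:
    if code == 0:
        return "good"
    if code == 2:
        return "restorable-with-warnings"
    return "failed"

def run_restore_test(cmd: list) -> str:
    """Run a restore command (e.g. a Duplicati CLI restore into a
    scratch directory) and classify the result. The command itself
    is a placeholder: supply your own, with your own options."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return classify_exit_code(result.returncode)
```

Logging the classification per backup set over time is what produces the kind of percentage breakdown shown at the top of this thread.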

Also, one thing I’m going to do is install the latest version on all sources, because that could be a meaningful change.

And if there’s something good about this: at least the full-restore error reporting works. It would be worse if it said OK when everything isn’t actually OK. The final hash verification is also good: whenever Duplicati has said a backup was successfully restored, the restored data has never been broken.

I suspect you are doing “direct restore from backup files”? (Which of course is the best type of test for a DR situation, as it doesn’t rely on the local database.) When you do this, Duplicati does have to build a temporary database, and my guess is you are hitting a bug where bad dindex files force dblocks to be read in order to create that temp database. The same issue occurs when you aren’t doing a restore test but are instead just recreating the local database.

There is a fix for this (at least in my experience): a way to regenerate those “bad” dindex files. But it requires a functioning database first. If you have the time and inclination, it may be an interesting test. After the dindex files are regenerated, I am betting a direct restore from backup files will run much more quickly (at least the temp-database creation phase).
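A sketch of that regeneration idea, assuming a local-folder destination and a functioning local database. The destination path and CLI name are assumptions (the binary name varies per platform), and this dry-run only prints the commands instead of executing them; remove the echoes once you have verified the paths:

```shell
# Assumed values; adjust to your installation before running anything.
DEST="/backups/duplicati-target"   # hypothetical destination folder
CLI="duplicati-cli"                # binary name differs per platform

# 1. Move the suspect dindex files aside (keep them until repair succeeds).
echo mkdir -p "$DEST/dindex-quarantine"
echo mv "$DEST"/*.dindex.* "$DEST/dindex-quarantine/"

# 2. Repair can then recreate the missing dindex files
#    from the still-functioning local database.
echo "$CLI" repair "file://$DEST"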

I guess this is the wrong thread for this discussion, but just saying: these are age-old problems with Duplicati which are still unresolved.

Slow recovery, code: 2

Remote file referenced as by, but not found in list, registering a missing remote file
Found 1 missing volumes; attempting to replace blocks from existing volumes

Totally fubared, code: 100

ErrorID: DatabaseIsBrokenConsiderPurge
Recreated database has missing blocks and 8 broken filelists. Consider using "list-broken-files" and "purge-broken-files" to purge broken data from the remote store and the database.

I’m sure there are better threads for this topic… So let’s not continue here…

Anyway, when backups are systematically and repeatedly corrupted, it’s not a good sign for any program, and especially not for a backup program.

Edit: Linked to this thread: Backup valid, but still unrestorable? Issue persists, after three years.

I’ll confirm one sample case, and update that thread accordingly.

Hi to all!

A lot of time has passed since this thread was started.
I would like to ask you for a brief opinion:

  1. After such a long time, is Duplicati stable and trustworthy for you?
  2. Are there still any mysterious problems, the most terrifying being “Duplicati claims that everything is fine, and then it suddenly turns out that the backup is damaged”?
  3. Has anyone switched from Duplicati to another solution?

I know it still has Beta status, but some posts sound as if Duplicati were in a pre-pre-alpha phase :frowning:

Personally, I need to use it on several Ubuntu machines, and I wonder whether it works better or worse in the Mono environment than on Windows.

The entire Duplicati project seems to have a great idea, a brilliant design basis (the whitepaper), fantastic features… but not enough programming power :frowning: (I am not talking about the competence of the creators, only about the number of programmers).
I’m sorry, but that is my impression, and I hope I’m wrong.

Your opinions are important to me because I need a production solution. Currently, I am looking for a solution for Ubuntu that is open source + has a GUI + deduplication + incremental backups + encryption.
I’m deciding between Duplicati, Vorta (a GUI for Borg), or a


Hello and welcome to the forum!

Many people use it and find it reliable, but some people do have issues. (The posts on this forum are, of course, by people looking for support with issues; you don’t often see posts from people who DON’T have issues.)

There are also pending known issues that still need fixing of course. Whether or not those affect you depends on your use case. For me the pending issues aren’t a problem.

You’re not wrong, it seems like we don’t have many people actively working on development. Volunteers are always welcome.

There are a lot of options available. I suggest you try several out and compare. That’s what I did when I stopped using my previous backup solution (CrashPlan).

Good luck on your journey!

Look at the dates on the posts in this topic. Personally, I had problems before but debug/fix efforts prevailed on many of them. There are some unfixed, maybe due to the developer shortage. Others are rare, and poorly understood because they’re rare. Reproducible test cases can help immensely.

Good practices for well-maintained backups has things you can do to attempt to keep things happy and fast.

For a sense of scale, shows 4 million backups/month. Not all post. :wink:

Keep multiple backups of critical data, maybe made by different programs. Reliability varies, but none are perfect.
If you’d consider a licensed closed-source GUI on top of an open-source engine, some folks like Duplicacy. Comparisons are hard due to different user-base sizes and uses, so personal evaluations become helpful.


It’s a bit sad that there are no developers willing to help :frowning:
Maybe you need to take more care of marketing, collect more funding to pay helpers, etc.? I just mean making Duplicati more widely known in the world :wink:

Do you have any experience with Vorta/Borg or Kopia?
I wonder if the Duplicati + Vorta (or Kopia) scenario is a good idea…

That’s why I’m reviving this old thread: to ask whether the authors of the negative posts have solved their Duplicati problems or switched to another backup application.

This is a bit like judging the reliability of cars by walking into a Bentley or Rolls-Royce workshop: you’ll think “oh my God, so many broken Bentleys! = I won’t buy a Bentley” :wink:

Here, however, I have the impression that the data does not include backups that cannot be restored (because they are broken, but you don’t know about it).

I am only considering FOSS solutions.
They don’t even have to be free (I always send a donation because I consider it fair play), but they must be open source.

You can try, but the forum doesn’t email all past posters automatically, so you’re relying on the right people noticing.

I wish there were restore statistics of some sort, but there aren’t. My assumption is that it has worked well enough for people to keep using it for backup. Couple that with the assumption that failures get posted here, and try to draw inferences.

Generally, before getting to the point where it’s concluded that a backup cannot be restored, we lead people through Disaster Recovery, and if all else fails there’s Duplicati.CommandLine.RecoveryTool.exe, which gets run (or even mentioned) quite rarely given the number of backups. You can search for uses yourself…

This does not mean that recovering before that point is easy – it can be difficult and take forum assistance.

I opened this thread, so I should probably answer too.
I installed Duplicati 2 on two Windows systems to create online backups to an off-site location via SCP.
One user has stopped using it, mainly because it was using too much (well, all of the little available) internet upstream bandwidth, and online TV was suffering from this. Clearly not a problem of Duplicati 2; I assume other online backup solutions would have similar problems :-).
The other system is running with an “acceptable” failure rate. It fails to back up every few months and then needs manual attention (recreating the local database, IIRC).
To be honest, I haven’t tried a restore in some time, but I will do so now, as I happen to have the backup disk right here (I never considered restoring over a home DSL connection; my plan was always to go to the backup site in case of a failure).
UPDATE: The restore worked. Not super fast, but it worked, as far as I can tell.

This is exactly one of the pain points. But even worse is the silent corruption.

Did you do the restore test right? I.e., without the local database, for example on another computer. One of the main rage points is that the test works as long as you still have the original data, because it can recover using local blocks… It’s just like those trojan-horse compression apps that “compress” data very efficiently, and work perfectly until you delete the original files. Yep, it didn’t compress anything at all; it just referenced the original files. I’m actually very worried that people will find this out the hard way. The issue probably isn’t technically big, as I’ve posted elsewhere, but it’s still a total showstopper.

In my case, I will create a fresh full backup for the simple reason that the backup medium is reporting uncorrectable sectors. I’m not taking any risk there.

The backup was done on a Windows computer and saved to a remote machine using SCP. For maintenance reasons, I currently have this “remote” machine at my home and used it to perform a restore test.
For the restore test, I installed Duplicati on my Linux desktop and did a full restore of a backup point from about 1 month in the past. Data was transferred over a local LAN; speed was not that great, a bit more than 10 MByte/sec. CPU usage on my Linux computer was about 300% on an old Opteron with 8 threads. I was positively surprised by the fact that it is multi-threaded :slight_smile:


Any idea why it drastically decreased compared to 2 months ago? Any FOTM tool (i.e. Kopia), or more precise information about this? Thank you!

I’m not seeing that. The monthly graph by compression is nice because almost all is in .zip, so there’s no need to total the categories manually. This went up a bit from 3.99 to 4.31 million from 2021-11 to 2022-01.

One anomaly is the monthly graph by operating system, where Linux dropped in 2021-10. My guess is this is from Old Let’s Encrypt Root Certificate Expiration and OpenSSL 1.0.2, and the resulting inability to return some usage reports.

Because client-side fixes can be hard, I suggested a server certificate change. I haven’t heard of it being done. should be the code. I know there are details the graphs don’t capture.

I don’t follow the “FOTM tool (i.e. Kopia)” comment, but I don’t have access to the server or any further precision.

Update, finally. I’ve reset ALL backup sets and updated ALL clients to the latest version. I’ll now run a full restore test on ALL systems. After that, I’ll keep running automated restore tests as normal. Then we’ll see whether things are broken or not. Some of the older backup sets could have been partially broken for a very, very long time, even if restore worked. As I said, it was obviously bad code in earlier versions.

Yet even with the latest version, the restore is sometimes absolutely, ridiculously slow. There’s a very bad code smell from at least 10 miles away. But that might have been caused by older versions leaving some gunk or broken files in the directories.

As I said, restore often gave code 2, which means the backup was broken but could still be restored. Which obviously shouldn’t happen.

I hope you had adjusted the blocksize on large backups to keep the number of blocks below a few million. It makes a big difference to database performance. A blocksize change does require a fresh start.
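As a rough illustration of why this matters (the ~100 KiB default blocksize is an assumption here; check your version’s documentation), a couple of terabytes at the default produces tens of millions of blocks for the database to track, while a larger blocksize brings that down by an order of magnitude:

```python
# Back-of-the-envelope block-count arithmetic for a Duplicati-style
# deduplicating backup; the 100 KiB default is an assumption.
def block_count(source_bytes: int, blocksize_bytes: int) -> int:
    """Approximate number of unique blocks the database must track."""
    return source_bytes // blocksize_bytes

TB = 1000 ** 4  # decimal terabyte
default_blocks = block_count(2 * TB, 100 * 1024)  # 2 TB at ~100 KiB
larger_blocks = block_count(2 * TB, 1024 ** 2)    # 2 TB at 1 MiB
print(default_blocks, larger_blocks)  # roughly 19.5 million vs 1.9 million
```

Actual block counts will be lower with deduplication, but the ratio between the two settings is what drives database size and speed.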

I first thought this followed on from the paragraph above it, but now I’m thinking it’s the earlier history, especially since the first paragraph spoke in the future tense. The third paragraph might also be about prior results?

I can test the theory if you like, but it looks like it doesn’t even take an error, merely a warning, to get code 2. Personally, I think Duplicati sometimes over-warns, but if you keep seeing these, what warning is it?

Yes, I sometimes get warnings on restores merely because I have the destination folder open in an Explorer window.

--dblock-size="1GB" --blocksize="64MB"

The ridiculously, insanely slow restore usually starts with “Registering missing blocks”… And then you know you can throw all wishful thinking about RTO out the window, because it can take days or weeks, as mentioned here earlier.

Related to the previous statement: it starts with “registering missing blocks”… And if you’re able to wait through the code smell and the insanely slow single-threaded code (which, btw, is NOT limited by I/O; it’s just burning CPU cycles), then you’ll end up with code 2, if you’re lucky. But code 100 is also a likely result, with restore failure after that.

It’s not over-warning if it’s down to random chance whether a backup restore will ever complete successfully.

But let’s hope the newer version is less likely to constantly corrupt data. We’ll see. As mentioned, today I’m testing all the backups and confirming that the backup sets are now all good. Then I’ll just keep monitoring the failure rate, watching for the classic Duplicati silent backup-set corruption problem I’ve been so frustrated with. Hopefully the latest canary version handles the situation better, also with compact. At least I’m hopeful, because now it seems, at least at times, to start recovery correctly when aborted, as well as do some recovery steps that were missing earlier.

Ref: FTP delete not atomic / verified / transactional (?)

I’ve also updated absolutely all related software: Duplicati, .NET, Windows versions, back-end server software, and all other possibly related things.