macOS: After downloading and installing the offered beta 184.108.40.206_beta_2019-07-14 (via the web interface), the first run of my backup says: Unexpected difference in fileset version 11: 3/25/2019 9:28:51 AM (database id: 119), found 221452 entries, but expected 221453
Repair database gave ‘1 warning’ (not shown) and doesn’t help
After ‘Recreate’, I got: The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.
The best way to recover from this (in my experience) is to just delete the offending backup version - no need to repair or recreate the database, and you only lose that one backup version.
In the Web UI, click on the backup set and then click “Commandline…”. Choose “Delete” from the dropdown Command list. Then scroll to the bottom and from the Add Advanced Option dropdown, choose “Version”. Type in “11” then click the Run button.
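For reference, the same delete can also be run from a terminal with Duplicati's command-line client. This is only a sketch, assuming a macOS/Linux install where `duplicati-cli` is on the PATH; the storage URL, passphrase, and database path are all placeholders for your job's own values:

```shell
# Delete only backup version 11 (the one with the fileset error).
# The URL, passphrase and dbpath below are placeholders, not real values.
duplicati-cli delete "s3://my-bucket/my-backup" \
  --version=11 \
  --passphrase="my-secret" \
  --dbpath="$HOME/.config/Duplicati/XXXXXXXXXX.sqlite"
```

The Web UI's "Commandline…" screen pre-fills the URL and options for you, which is why it is usually the easier route.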
Edit: By the way, it is believed that this bug has been fixed in newer versions of Duplicati - but it hasn’t made it to the Beta channel yet.
I’m returning to this after a few months of being too busy with other priorities. (I could live without this backup running, as it is experimental/testing, and my other machines/users don’t have these problems. They have other Duplicati hiccups, but so far with successful repairs when needed.)
Because I forgot where I was, I did a delete and rebuild again. It is still running (for many hours, using 1-2 cores on my Core i7) and will probably fail again after chugging along for many more.
So then I need to try the “Version 11” thing suggested above. Now, if that fails, is my backup completely hosed? All my history lost? Just throw everything away and forget about it? Because if that is the case, I’m starting to wonder if using Duplicati is really a good idea. I have some family members using it (some still on 220.127.116.11 beta, because I stopped updating them to a newer version when I ran into this issue). A completely hosed backup is something so fundamentally wrong that I need to reconsider using Duplicati. I like the approach, I like the community, but I’m getting worried because of this issue and the other hiccups.
I’m a bit confused - is the database recreate still running? Yet you tried to delete a specific backup version? Or are you speaking in hypothetical terms here?
The long database recreation time may be due to a bug that has been fixed but is not yet available in the beta channel. The same goes for the ‘unexpected difference in fileset’ bug - the fix is not in the beta channel yet.
If you are willing, I might suggest you run the latest canary 18.104.22.168 on this machine where you have a database recreation issue. See if it recreates the database more quickly. You could continue to use it until the next beta version is released, then switch back to that channel.
In my experience, the latest canary versions are better than the 2.0.4.x beta releases. I’m hoping a new beta will come out soon.
The database recreate fails with “The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.” This time, the fact that the computer went to sleep while repairing may have stopped the recreate; I don’t know, and I’m loath to try again because the whole process takes 30 hours or so.
Every time I try to solve my problem, it takes ages and it fails. I am confused as well, and the backup hasn’t been running for months now, because (a) I don’t know what to do, and (b) whatever is suggested does not work. The database has been deleted (again, that is Duplicati’s own suggestion: if repair does not work, try delete and repair, which I take to mean “delete and recreate from the backup”), so I guess it is gone.
Anyway, I might try the canary release. There is no going back from that either, I know; it is another suggestion without a rollback. What I should have done in the first place is save the database so I could return to an earlier version. (Maybe Duplicati should be very careful about actions that cannot be rolled back and always create a saved state first, e.g. keep a copy of the database before proceeding and roll it back when a repair fails, so that another attempt can be made from the original situation.) Maybe.
At this stage, I’m starting to believe I have lost my entire backup, all the history in it, etc. I’m also wondering how I will have to remove the actual index and data files from the S3-compatible storage, and what to offer the friends who back up to my S3-compatible storage for off-site backup. I can’t let them use something that is unreliable, but there are no really good solutions for macOS.
Duplicati does exactly that. Before each upgrade of the database, it makes a copy with a timestamp in the filename, and places it in the same folder as the database. This allows you to roll back without any hassle (unless you start making backups with the new database and then regret later).
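As an illustration of that rollback (the exact folder and filenames vary by install; on macOS the databases typically live under `~/.config/Duplicati`, and the names below are made up):

```shell
# Stop Duplicati first, then look for the timestamped pre-upgrade copy
# sitting next to the job database (filenames here are illustrative).
ls "$HOME/.config/Duplicati"
#   CWDANGUOGI.sqlite                        <- current job database
#   backup CWDANGUOGI 20190714093000.sqlite  <- pre-upgrade copy

# Roll back by restoring the saved copy over the current database.
cp "$HOME/.config/Duplicati/backup CWDANGUOGI 20190714093000.sqlite" \
   "$HOME/.config/Duplicati/CWDANGUOGI.sqlite"
```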
This has been one of the main development issues too, along with too few volunteers in all areas; probably there are some process issues as well. Regardless, the Beta code base is a year old… Numerous fixes to prevent backup breakages (and allow a fast Recreate if need be) are not in Beta.
Eventually new Beta should come. 22.214.171.124 canary is probably very good, although also very new.
Major area of worry to me personally is that the Stop button change in 126.96.36.199 adds lots of issues.
For your experimental/testing machine, a potentially more surprising release like a canary might fit; however, if you’re using it as a proxy for family backups (including their fixes), sharing their pain might be best.
Generally, my advice on reliability is that Beta code is, pretty much by definition, still not fully stable, meaning it should not be the only backup for anything you’d seriously hate to lose. Restarting fresh is sometimes the best path for future backups; however, there are numerous levels of recovery from problems if restores are what you want. Duplicati.CommandLine.RecoveryTool.exe uses a different restore method than the usual GUI or CommandLine restores and is more tolerant of problems. Beyond that, there’s a Python script mainly intended to have no dependency at all on Duplicati code:
If you still have files on the destination, you might be able to recover your backup, but whether you can continue will depend on testing. The recent Canary Recreate is usually vastly faster than before.
It can’t overcome all possible errors, though, and only more testing will reveal how healthy the destination is.
You can also get a preview of whether it will work by running Canary on some other system and trying a direct restore. If you upgrade a system’s Duplicati now, downgrading can be harder due to DB format changes. This is where the database backup files mentioned above come in handy. Upgrading via the GUI, then, if you like, a quick downgrade by manually removing the upgrade (and using a backup DB if needed) will work.
So, what is a version I should migrate to? I am running a mix of beta and even older beta on family member systems, but reliability is poor. One family member accidentally upgraded to macOS Catalina.
I probably need to install a new Canary release, and frankly, using that for backups makes my neck hairs stand up. But if I bite that bullet, which one should I use and is upgrading from older betas a smooth ride?
None are perfect, but neither is beta. Release: 188.8.131.52 (canary) 2019-11-05 has the best Catalina support and is looking good after two weeks, except for somewhat-hidden damage from the GUI stop button. The safest plan is to just not use the stop button. It was broken in the current beta too, but the fix added issues.
Upgrades are generally a smooth ride, but see the earlier post about downgrade difficulty. Start slowly, testing less important backups first. Per-backup database files are (I think) not updated until you run that backup.
If you think this Canary is doing well, you can change Settings back to the Beta channel to avoid worse Canary releases (or they could be better; one never knows). Keep an eye on releases coming out.
If you want to give specifics, I might be able to point to specific fixes that Canary has but Beta lacks. There is currently a lot of desire for another Beta, after it’s determined what to do about stop button.
Although I don’t know what other things you’re seeing, a big one for me (and your original post) was:
OK. I installed canary 184.108.40.206 and tried recreating the database again (from the web GUI; I have never really investigated running the CLI on macOS). This failed in the same way after it tried to rebuild the database. I get popups about warnings, but the warnings themselves aren’t shown.
I have now deleted the offending version 11 (as indicated by the beta’s log messages; the canary doesn’t seem to show any log messages) and am now recreating again. It will take another day or so. The Web UI still says there are 16 versions.
An additional question, just so I am certain: if version 11 was damaged, does that mean there has been bit rot on the storage side? Or was this a result of compacting?
There is additional logging (Duplicati-wide) in ‘About’. There I see:
Duplicati.Library.Interface.UserInformationException: Recreated database has missing blocks and 1 broken filelists. Consider using "list-broken-files" and "purge-broken-files" to purge broken data from the remote store and the database.
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun (Duplicati.Library.Main.Database.LocalDatabase dbparent, System.Boolean updating, Duplicati.Library.Utility.IFilter filter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+NumberedFilterFilelistDelegate filelistfilter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+BlockVolumePostProcessor blockprocessor) [0x0146f] in <759bd83d98134a149cdf84e129a07d38>:0
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.Run (System.String path, Duplicati.Library.Utility.IFilter filter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+NumberedFilterFilelistDelegate filelistfilter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+BlockVolumePostProcessor blockprocessor) [0x00037] in <759bd83d98134a149cdf84e129a07d38>:0
at Duplicati.Library.Main.Operation.RepairHandler.RunRepairLocal (Duplicati.Library.Utility.IFilter filter) [0x000ba] in <759bd83d98134a149cdf84e129a07d38>:0
at Duplicati.Library.Main.Operation.RepairHandler.Run (Duplicati.Library.Utility.IFilter filter) [0x00012] in <759bd83d98134a149cdf84e129a07d38>:0
at Duplicati.Library.Main.Controller+<>c__DisplayClass18_0.<Repair>b__0 (Duplicati.Library.Main.RepairResults result) [0x0001c] in <759bd83d98134a149cdf84e129a07d38>:0
at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0026f] in <759bd83d98134a149cdf84e129a07d38>:0
at Duplicati.Library.Main.Controller.RunAction[T] (T result, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x00007] in <759bd83d98134a149cdf84e129a07d38>:0
at Duplicati.Library.Main.Controller.Repair (Duplicati.Library.Utility.IFilter filter) [0x0001a] in <759bd83d98134a149cdf84e129a07d38>:0
at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x003ad] in <63a01150aadd4a64a4d7c359bdc1e45d>:0
This gives me an option for when the next repair fails in a while (after the recreate now running, which will take another day or so).
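For when that happens, the two commands named in the error message can be sketched like this (again assuming `duplicati-cli` is on the PATH; the URL, passphrase, and dbpath are placeholders for your job's values):

```shell
# First see which source files are affected by the missing blocks.
duplicati-cli list-broken-files "s3://my-bucket/my-backup" \
  --passphrase="my-secret" \
  --dbpath="$HOME/.config/Duplicati/XXXXXXXXXX.sqlite"

# Then purge those entries from the backup and the local database,
# so the remaining healthy data can be backed up and restored again.
duplicati-cli purge-broken-files "s3://my-bucket/my-backup" \
  --passphrase="my-secret" \
  --dbpath="$HOME/.config/Duplicati/XXXXXXXXXX.sqlite"
```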
Progress! With 220.127.116.11_canary, and after deleting Version 11 through the commandline interface of the Web UI, I was able to back up again. I’m back to 8 versions (from 16), and backup retention logged the deletion of one version, so 16-2=8?
“Unexpected difference” is a database consistency failure. Remote bit rot wouldn’t immediately change the DB. The best indication that the issue was caused by compacting is if your job log just before the issue shows a compact ran; however, logs do not survive a database Recreate. If you have email or other logs, you could look at them.
I don’t know about that version count discrepancy. Generally, every dlist file on the remote represents a version, and the filename itself indicates when the version is dated, e.g. in a Restore dropdown.
The best verification of a backup (if you have another computer) is a direct restore; otherwise, you can keep the computer that made the backup from optimizing speed with local file data by setting --no-local-blocks.
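A hedged sketch of such a verification restore, with placeholder URL, passphrase, and paths:

```shell
# Restore a test file into a scratch folder while forbidding the use of
# local file data, so every block really comes from the destination.
duplicati-cli restore "s3://my-bucket/my-backup" "Documents/test-file.txt" \
  --no-local-blocks=true \
  --restore-path="/tmp/duplicati-verify" \
  --passphrase="my-secret"
```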
Regardless, it’s good to hear of progress, and if your experience is like mine, this version should be far less likely to generate “Unexpected difference in fileset” at seemingly random times due to the compact.