Repair fails, database recreate takes a ridiculously long time

One of my three backups started saying 3 files were missing from the database, and to do a repair. I did and it then said one file was missing. Repeated repairs failed to solve this error (and didn’t report which file it was or even any more detailed error as far as I could find), so I foolishly tried to recreate the database.

After running for 24 hours, it was only 15% of the way through. It’s going to run for a week! In the meantime the other two backups are blocked, so my files are at risk for a week because of this. How can it possibly take a week to reindex the files? That’s just absurd. What on earth is it doing that would take so long?

The backup is about 350GB in 421 versions (that’s after about 7 weeks of use). It’s on a local USB disk, so it’s not like it has to go out over the network to look at the backup.

And why would it lose track of some files in the first place? Is that a bug (surely it must be)? If it is expected behaviour then it is alarming and, at this rate, effectively unrecoverable. I’d do better to delete the backup and start over with that one.

In the meantime I’ve had to abandon the re-create just so I get a backup done.

Same here, and this is a BIG problem (that a lot of people have). In fact, for me Duplicati was perfect until this issue; it makes no sense to have a backup tool that can’t restore the data. I’ve been waiting for four days and the DB recreate still isn’t complete, and it seems it will need another week. It’s ridiculous: I can’t do new backups and I can’t restore files (you know, Murphy’s Law, I need to restore two files…)

Nobody seems to know how to solve this. With small backups the speed is probably OK, but when you have a lot of files and hit this strange DB problem… you’re in trouble.

I redid the whole backup in the end, which took about 12 hours, around 12× faster than rebuilding the database.

A nice solution if you don’t need to restore something… but remember that this could happen again, and maybe then you’ll need your data back. That’s my case: I can’t just redo the backup, because I need to get some files out of it. As I said, I have a backup that can’t be restored until that process ends, and who knows how long that will be.

Indeed. That’s partly why I have three separate backups on interleaved schedules: one on a local disk and one on a LAN server, each running two-hourly, and one on Backblaze for off-site, which runs daily.

That’s a lot of versions - you must be doing hourly backups. My guess is that you’re running into a database performance issue due to all that history. If you’re using a newer canary version and don’t need ALL that history forever, consider using the newish --retention-policy parameter.
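For illustration, the syntax (as documented for recent canary builds; the values here are just an example, not a recommendation) is a list of timeframe:interval pairs passed as an advanced option on the job:

    --retention-policy="1W:1D,4W:1W,12M:1M"

which keeps one backup per day for the last week, one per week for the last four weeks, and one per month for the last twelve months, thinning out the in-between versions as they age.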

Do you happen to still have the original error? It’s possible Duplicati was complaining about files missing from the destination. If the missing files included dindex files then Duplicati would have to start downloading full archive files one at a time until it found the data needed to rebuild the database.

That’s a valid point; unfortunately, at present I don’t think a rebuild can be interrupted and then continued.

Do you happen to still have the original error?

The most recent one from after the first attempt at repair is still in the log:

Duplicati.Library.Interface.UserInformationException: Found 1 files that are missing from the remote storage, please run repair
   at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.Run(String[] sources, IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass16_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)

The earlier one that got me started repairing differs only in that it said 3 files, not 1.

consider using the newish --retention-policy parameter

I’m on 2.0.2.1 which still seems to be the version available for mainstream download (I originally installed it in August).

I can’t see any evidence of an advanced setting called that. I would always want to keep at least two versions of any file, maybe 3, but I certainly don’t need more than that. (I would much rather base this on versions than time).

Thanks for the details. So somehow your destination seems to be missing a file. Unless somebody who already knows it jumps in, I need to check on how to move forward with resolving your error.
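In the meantime, repeating the repair from the command line sometimes gives more detail than the GUI. A minimal sketch, assuming a Windows install; the storage URL and database path are placeholders you would substitute from your own job’s Database screen:

    Duplicati.CommandLine.exe repair "file://D:\DuplicatiBackup" --dbpath="C:\Users\you\Duplicati\XXXXXXXX.sqlite" --verbose

(On Linux/macOS the launcher is duplicati-cli.) The --verbose flag just makes the output chattier, which might at least name the file it thinks is missing.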

You are correct that 2.0.2.1 does not have that parameter, though it does support “keep this number of backups”. Did you use that setting in your job setup?
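For reference, the equivalent advanced option is --keep-versions, with --keep-time as its age-based sibling; a one-line sketch with an example value:

    --keep-versions=3

which deletes versions beyond the newest three once a backup completes.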

In my case I have only 14 versions, so that’s not the problem.

So you have “a lot” of files? Some less-than-optimal SQL has been found that gets very slow when lots of files (or possibly long paths) are involved.

Not that I’m saying that’s what’s going on for you; I’m just trying to exclude known issues.

The source has >800,000 files and about 500GB; the destination has (14 versions) >10,000 files with data and index (50MB per volume). Also a lot of long file paths.

Anyway, I’ve stopped the rebuild; after eight days it was still running (more info than a progress bar would be nice). I started a new backup and all seems fine (except one new issue with the command line), but I guess this problem is going to appear again.

If a repair is needed again, then it might - assuming my theorized slow points haven’t been found and fixed by then.

If I recall correctly, there are 2 levels of repair - the normal level downloads the dindex files (small lists of what’s been backed up, but no actual file contents). If dindex files are missing, then the actual dblock files (in your case the 50MB files) have to be downloaded and scanned so that the local database can be rebuilt.

Note that a restore will work just fine without the local database, so your backed-up data is as safe as your destination files - but backups still need it.
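To make that concrete, a restore can be pointed straight at the destination; a rough sketch, with the URL, source path, and target folder as placeholders (and --no-local-db, if I recall the option name correctly, forcing it to work from the remote file lists via a partial temporary database):

    Duplicati.CommandLine.exe restore "file://D:\DuplicatiBackup" "C:\Users\you\Documents\important.docx" --restore-path="C:\RestoreTest" --no-local-db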

I have the same problems with incredibly slow database recreation.

I updated my macOS. Not wanting to start the whole backup over and lose my file versions from the last year, I recreated the backup job (~400 GB) from an imported JSON file.

After starting the backup I got error messages about orphan files and files missing from the backup. Thinking it was not a big deal, I followed the suggestion from the Duplicati app to delete and recreate the database. The recreation proceeded to about 70% completion in four hours and now seems stuck. I checked lastPgEvent under System Info; the entry appears to change every few hours or so when I look, but the progress bar is still at 70%. Twelve hours later it is still around 70%, though the info under lastPgEvent keeps changing.

I don’t have much confidence anymore that Duplicati is a backup solution I can depend on. I really doubt that I will ever be able to restore files after a serious hard drive crash without going through days of database recreation and possibly other failures.

Not to justify any issue in repair or recreate, and I know you want to keep your old file versions, but you don’t need database recreation to restore files should you suffer a hard drive crash. Restoring files from a backup gets into this lightly, and Disaster Recovery gets technical, including Restoring files using the Recovery Tool. Backing up the configuration is important because it’s kept locally and a broken disk could take it out as well.
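For what it’s worth, the Recovery Tool flow from that last link is roughly download, index, restore, something like this (from memory, so check the linked page for the exact arguments; the work folder is a placeholder):

    Duplicati.CommandLine.RecoveryTool.exe download "file://D:\DuplicatiBackup" C:\RecoveryWork
    Duplicati.CommandLine.RecoveryTool.exe index C:\RecoveryWork
    Duplicati.CommandLine.RecoveryTool.exe restore C:\RecoveryWork 0

It needs the backup passphrase but no local database at all, which is the point in a disaster scenario.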

How to test / verify your backups is good occasional practice, and a confidence booster (it could be download-hungry though), and one can also raise --backup-test-samples on the automatic verification done after backups. One thing about both of these is that they focus more on how everything looked recently (including old files). The TEST command does have an option to verify old backup views, but I think you have to ask one-by-one. Sometimes I wonder if recreate trips over old errors. I’m not sure if The REPAIR command takes the --version option, but it might. Newest is probably 0, and I think ranges are accepted. If it works, it should also be faster.
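As a concrete sketch of that verification idea, assuming a Windows install (the storage URL and sample count are placeholders, from memory rather than authoritative):

    Duplicati.CommandLine.exe test "file://D:\DuplicatiBackup" 5

downloads and cross-checks 5 sample sets; passing "all" instead of a number checks everything. On the backup job itself, something like --backup-test-samples=5 raises how many samples get verified after each backup.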

The general topic of “repair does not work (or not always, or something)” was recently proposed as a top priority, so possibly canary will see some more work on improvements (and at some point those will flow into the beta).

For the current slow situation, you might get some insight from About → Show log → Live (adjustable level), however it would probably be more to see what sort of activity is happening than to obtain a detailed analysis.

I thought I’d try a full recovery as if I had lost the disk completely. It’s currently about 400GB in all.

My first attempt failed: it took so long that the next backup kicked in, and after 8 hours(!) it complained there were unexpected files. OK, so that wouldn’t arise in the real situation. So I suspended backups on that destination.

Starting about noon on Thursday 4th, it then took around 8 hours to rebuild the database. I’m puzzled why you say “you don’t need database recreation to restore files should you suffer a hard drive crash”, as that is what recover does (or says it is doing, at least). There may be other methods, but if it doesn’t need it why does the primary restore do it? It actually said “Building partial temporary database… recreating database”, so maybe it didn’t do it completely, but it still took 8 hours.

Then it said “building list of files to restore” with a progress bar, after 20 minutes it said creating folders, and 5 minutes later “Scanning for local folders” with a progress bar (image 1). This bar never advanced for the remainder of the restore, but the text did change at some point to say “downloading files” (image 2). Again the bar never advanced, so I had no idea how far it had got.

On Saturday I checked the target disk, and it appeared to be complete, in that the size looked about right and every file I sampled was present and correct. However, the restore didn’t say it was complete, so I left it, and it remained as image 2 all through Saturday and all through Sunday. It had changed to complete (the donations message) by Monday morning (according to the messages it actually completed on Monday at 03:05).

So, in all it took 87 hours to complete the restore, with no useful progress information, and in particular I don’t know what it was doing for the last 40 hours or so after it looked like it was all restored (was it maybe verifying?).

This was from a network disk on a gigabit network where both ends were constrained by USB2, which in a pure file copy would be the limiting factor. 400GB over 87 hours means about 1.3MB/sec.

I then tried a Windows directory copy on a pretty typical 110MB folder containing 8,000 files (lots of small files take a lot longer to copy than big ones). This took 371 seconds or 0.296 MBytes/sec. By that measure 400GB (at 1GB = 1000MB) would have taken about 375 hours!

So on the face of it, while 87 hours sounds like a very long time, the restore actually took less than a simple file copy would; but as I say, it seemed substantially complete after some 48 hours, so it’s not orders of magnitude different, and if the database recreation didn’t take so long (couldn’t the database be uploaded with the backups in the first place?) it would be pretty good…

So my conclusions:

  • most importantly, it completed successfully!
  • it took a long time, but copying files also takes a long time, and it isn’t comparatively massive
  • it showed no progress in the progress bar, so I couldn’t tell how far it had got. This was the most difficult thing.
  • for the last half-ish of the time, it didn’t appear to be doing anything!

[image 1]

[image 2]

My comment referred to full database recreation, not the “partial temporary” which presumably can run faster.

The full database recreate is on the Database menu, which is also a place to go if Duplicati suggests a repair.