Disk Image Malformed Database after Upgrade

I just did the 2.0.5.1 beta upgrade from the previous beta. I have a small backup and a large one whose databases share the same .config/Duplicati directory on Ubuntu. After the upgrade the small backup succeeded, but the large one failed with “Malformed Database”, apparently because upgrading the large database ran out of disk space. I expanded the partition, but it still fails immediately with the malformed-database message. What’s sitting in the Duplicati directory is

-rw------- 1 km km  1217400832 Jan 24 02:06  76768675707384858884.sqlite
-rw------- 1 km km      139264 Oct 19 13:20  82757566777677678989.sqlite
-rw------- 1 km km 13886705664 Jan 24 02:30  87789071907784826689.sqlite
-rw------- 1 km km      122880 Oct 19 13:00 'backup 20191019010001.sqlite'
-rw------- 1 km km      122880 Oct 19 13:44 'backup 20191019014417.sqlite'
-rw------- 1 km km      122880 Oct 19 17:23 'backup 20191019052325.sqlite'
-rw-r--r-- 1 km km       76800 Oct 19 10:06 'backup 20191019101402.sqlite'
-rw------- 1 km km  1149227008 Jan 23 10:19 'backup 76768675707384858884 20200123102050.sqlite'
-rw------- 1 km km 13886705664 Jan 24 02:30 'backup 87789071907784826689 20200124022500.sqlite'
drwxrwxr-x 2 km km        4096 Jan 23 22:18  control_dir_v2
-rw-rw-r-- 1 km km         238 Dec 16 20:42  dbconfig.json
-rw-r--r-- 1 km km      126976 Jan 24 09:09  Duplicati-server.sqlite
-rw-r--r-- 1 km km      141312 Oct 19 09:54  GFAUEDNDGM.sqlite
drwxrwxr-x 3 km km        4096 Jan 23 22:18  updates

which includes two copies of the 13 GB database (which are identical by checksum). There is now 51 GB available in the expanded partition.

How do I recover?

I was going to suggest deleting 87789071907784826689.sqlite and then renaming 'backup 87789071907784826689 20200124022500.sqlite' to 87789071907784826689.sqlite, but if those two 13 GB files are identical, then that won’t help.
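
Before renaming anything, it might be worth checking whether either 13 GB file even passes SQLite’s own integrity check. A minimal sketch, with paths taken from your listing (run it from the .config/Duplicati directory; the check can take a while on files this size):

```python
import sqlite3

candidates = [
    "87789071907784826689.sqlite",
    "backup 87789071907784826689 20200124022500.sqlite",
]

for path in candidates:
    try:
        con = sqlite3.connect(path)
        # integrity_check walks the whole file; a healthy database returns "ok"
        result = con.execute("PRAGMA integrity_check").fetchone()[0]
        print(f"{path}: {result}")
        con.close()
    except sqlite3.DatabaseError as exc:
        print(f"{path}: cannot open ({exc})")
```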

One option may be to do a full database recreation. Hopefully this process won’t take too long since you are now running 2.0.5.1. Do you want to try that? In the Duplicati Web UI, click your larger backup set, then click Database…, then click the “Recreate (delete and repair)” button. It may take a while.

When I originally created it, it took about two weeks, and it got stuck multiple times and had to restart, sometimes taking a day just to catch up to where it had left off before it actually did more copying. I’m backing up to Google Drive. The dailies take a few hours.

I did have the foresight to copy the entire .config/Duplicati directory off to another drive before the update, though the copy is about one day’s worth of backups old. I had planned to make it immediately before the update, but Duplicati insisted it couldn’t activate the upgrade until after it did some pending backups.

Would I be better off moving the two big sqlite files out of the filesystem, substituting the corresponding one from a day or so ago, and trying again? Is there more to it than just substituting the single sqlite file?

If you have a backup from a day or so ago, you can try restoring that version. When you try your first backup, you’ll probably be prompted to run a Repair because the back end data will not be consistent with an older sqlite database.
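
If you go that route, the swap itself is just moving files around while Duplicati is stopped. A rough sketch, where the location of your day-old copy is a hypothetical placeholder:

```python
import shutil
from pathlib import Path

live_dir = Path.home() / ".config" / "Duplicati"
saved_dir = Path("/mnt/otherdrive/Duplicati-copy")  # placeholder for your manual copy
db_name = "87789071907784826689.sqlite"             # the large backup's database

# Keep the damaged database around under a new name rather than deleting it
(live_dir / db_name).rename(live_dir / (db_name + ".malformed"))

# Drop in the older copy; the next backup will likely prompt for a Repair
shutil.copy2(saved_dir / db_name, live_dir / db_name)
```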

Honestly if it were me I’d probably try the Recreate option first. It has improved drastically compared to the older beta version.

Well, I went with the recreate option as suggested. I should explain that this database/backup represents 100K files that total 5 TB of space.

The recreate took 20 hours. Then it did a backup that took 48 hours, during which it locally read all 100K files, with minimal contact with the remote.

It then did two unanticipated backups of a bit more than 3 hours each. I guess they must have been dailies that were queued up while the three days of recreate/backup were in progress.

It did its first regularly scheduled daily this morning, again about 3 hours. It can sometimes take an extra couple of hours depending on how much change there has been that day.

I have to wonder whether, if I had used the day-old copy of .config/Duplicati from just prior to the upgrade, it would have been a lot faster than doing the recreation.

It has been my experience that after a database recreate, Duplicati forces full reprocessing of all files, probably as a safety measure. It will take a while but ultimately doesn’t upload new blocks of file data (assuming the actual file contents didn’t change).
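
As a rough illustration of why all that local re-reading doesn’t turn into re-uploading: Duplicati splits files into fixed-size blocks and only sends blocks whose hashes it doesn’t already know about. A simplified sketch of the idea (the block size and hash are the usual defaults, but this is not Duplicati’s actual code):

```python
import hashlib

BLOCK_SIZE = 100 * 1024  # Duplicati's default blocksize is 100 KiB

def file_block_hashes(path):
    """Yield the SHA-256 hash of each fixed-size block of a file."""
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            yield hashlib.sha256(block).hexdigest()

def blocks_to_upload(path, known_hashes):
    """Return only the block hashes the backend doesn't already have."""
    return [h for h in file_block_hashes(path) if h not in known_hashes]

# After a recreate every file is re-read and re-hashed, but if its content is
# unchanged, every hash is already in known_hashes and nothing gets uploaded.
```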

Also yes, some jobs can be queued up after a long-running operation (like a database recreate). So you may see backups trigger immediately after a recreation completes.

Maybe. What’s puzzling is that you shouldn’t HAVE to make a backup before upgrades. Duplicati does that itself when upgrading to a new version if the database structure/schema has changed. I’m not exactly sure what happened in your case.

Sounds like it was possibly to play it safe in case something went wrong. But using an old database copy has its perils:

Repair command deletes remote files for new backups #3416
and if compact has run, the new files in the remote will be even less recognizable to the obsolete DB.

I think this is another of the timestamp precision issues. The remote dlist uses one-second resolution, whereas the database uses whatever the local filesystem (and, in some cases, the version of mono) will give, meaning all files on some systems will look a fraction of a second newer and get scanned to be sure.

Watching About → Show log → Live → Verbose may show a lot of lines going by that look like this:

2020-01-28 11:48:11 -05 - [Verbose-Duplicati.Library.Main.Operation.Backup.FilePreFilterProcess.FileEntry-CheckFileForChanges]: Checking file for changes C:\pathname, new: False, timestamp changed: True, size changed: False, metadatachanged: False, 12/25/2019 10:17:20 PM vs 12/25/2019 10:17:20 PM

If you really want, you can open the database and look at FilesetEntry.Lastmodified (100 ns resolution): it looks nice and even after a Recreate, but after a Backup it’s a fraction of a second higher than before.
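
If you want to poke at that yourself, it’s one query away. A minimal sketch, assuming the job database path from the listing above and that Lastmodified is stored as an integer tick count (100 ns per tick):

```python
import sqlite3

con = sqlite3.connect("87789071907784826689.sqlite")
rows = con.execute("SELECT Lastmodified FROM FilesetEntry LIMIT 20").fetchall()
con.close()

TICKS_PER_SECOND = 10_000_000  # 100 ns per tick

for (ticks,) in rows:
    # Right after a Recreate the sub-second remainder is 0; after the next
    # Backup it picks up the filesystem's fractional seconds.
    print(ticks, "sub-second remainder:", ticks % TICKS_PER_SECOND)
```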

This won’t occur on the ext3 filesystem, which has one-second resolution, but several filesystems can do better.
I think ls -l --full-time and stat will reveal fractional seconds, if you’re unsure whether they’re there.
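
Or, if you’d rather check from Python, something like this shows the same thing:

```python
import os

# Any file inside the backup source; the path here is just an example
st = os.stat("/path/to/some/backed-up/file")

seconds = st.st_mtime_ns // 1_000_000_000
fraction = st.st_mtime_ns % 1_000_000_000

# On ext3 the fractional part is always 0; on ext4 and most modern
# filesystems it usually isn't, which is what triggers the rescans.
print("mtime whole seconds:", seconds, "fractional nanoseconds:", fraction)
```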

Yes, I was being cautious by putting my own copy on a different filesystem. As it happens, that made sense, since the “activate” triggered a backup in the same filesystem, which did not have enough space. I wasn’t aware that it was going to do that, but I should have guessed.

My cautious plan could have worked. I did the copy, then the upgrade, and then the activate. However, the activate wouldn’t run because Duplicati had planned a backup that it wanted to complete first. That left me with a copy one day out of date.

I should have freshened the copy, but it takes a while, and at that point I didn’t know there was going to be an issue. I just did the activate, which then failed for lack of space for a backup of the database.

There is one mystery in my mind. After the activate failed due to lack of space, I was left with two identical 13 GB sqlite databases, one of them the backup, both bad. The copy I had saved earlier was 19 GB, and the one eventually recreated was also 19 GB. I would have expected a corrupted 13 GB database and a good 19 GB backup; how they both ended up as identical corrupted 13 GB files I don’t understand.

A long story, but just to recap the ultimate success: I grew the filesystem, did the 20-hour recreate, followed by the 48-hour backup (re-reading all 5 TB locally), and all is well.

I presume there is no reason to retain the bad 13GB backup.

Glad your problem is resolved! No, there should be no reason to keep the bad database backups.