Duplicati did not like upgrade to Server 2022

Have a Windows file server that was running Server 2019 and Duplicati 2.0.6.100_canary_2021-08-11, which has been running my backups fine for several years. Yesterday I upgraded the file server to Server 2022 and last night’s backups just went crazy.

RAM usage went to the maximum. The VM had 4GB, which has been more than enough for years, and even when I gave it more, up to 10GB, Duplicati just continued to swallow it up. When I cancel the jobs, the RAM falls back to normal levels of around 1.5GB.

The other thing is the crazy figures being displayed in the GUI.

I don’t have that much storage, just 8TB with around 5TB of data.

I appreciate I'm running the canary build, with all that entails, but I did not expect this to happen, seeing as two workstations I upgraded from Win10 to Win11, running the same build of Duplicati, haven't had any problems.

I have a feeling I'm now facing a total loss of all the backups and file versions going back several years. It doesn't help that one set of backups is in Wasabi, and I'm not sure what to do there.

Any ideas what could be wrong? I haven't seen any updates to the canary channel for such a long time; is it still the latest, or has the beta/release overtaken it?

I wonder if Duplicati had an issue enumerating your filesystem. It looks like it may have been looping and counting files multiple times. Have you customized the --symlink-policy setting?
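
If it had been customized, it would show up as an advanced option on the job or in the main settings, something along these lines (illustrative only):

    --symlink-policy=follow

The default is store, which just records the symlink itself; follow descends into the target, which could inflate the counts if a link points back into the source.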

Not that I can see. I checked the main advanced settings and the same for the job itself, and that parameter is not mentioned.

Currently I'm running a delete and repair on the database of the local backup (the one that goes to a file share), which is taking some time, so I will see how it goes after that. If it's OK, I will do the same for the Wasabi backup job that also failed on the same server.
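
(In case it helps anyone searching later: I'm doing this from the job's Database page, Delete followed by Repair. As far as I understand it, the rough command-line equivalent would be something like the sketch below, with placeholders for the destination URL and database path.)

    Duplicati.CommandLine.exe repair <storage-url> --dbpath=<path to job database>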

Do you run Duplicati as a Windows service? Is it using the default SYSTEM profile, or did you move the Duplicati config?
Typically getting that wrong produces a dead Duplicati rather than this, but I want to be sure it's not a possible cause.
Beyond that I have no guess why Server 2022 is working so very differently from Server 2019…

Duplicati says I have 20x more data than actual is a long shot but possibly relevant; maybe a sparse file or a bug.

You could note the time, run test backups of subsets of the source, and delete those versions when/if the bad area is found.
Maybe also turn off auto-compacts and retention deletions while doing the chopped-down backups (a sketch of the relevant options follows below).
You don't want one of them accidentally being picked as the backup-of-the-year that you keep for 10 years.
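
As a rough sketch of what I mean (double-check the names against the job's advanced options), the chopped-down test runs could temporarily add:

    --no-auto-compact=true

and temporarily drop any --keep-time / --keep-versions / --retention-policy settings so the test versions don't trigger deletions; the test versions can then be deleted by hand afterwards.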

If you don't want to mess with the real backup, you can do separate test backups instead, but that might be slower.
Cutting down the real backup probably won't upload much; it will basically just note that some files are deleted.
Actual deletion of file data from the destination won't occur, as long as the files are still in some other version.

You can watch About → Show log → Live → Verbose. Maybe you can match files to the size jump.
Looping or other bad enumeration patterns might also be easier to spot. If it helps, you can log to a file with
log-file=<path> and log-file-log-level=verbose.
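
For example, something like this added to the job's advanced options (the path is just a placeholder):

    --log-file=C:\Temp\duplicati-verbose.log --log-file-log-level=verbose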

Yeah, it's a service with the correct path to the profile. After I upgraded I did initially forget to sort this, but as I have done in the past, I cleaned up and got that done.

I ran a couple of scans of the drives involved; I could not see any weird symlinks, and all the files added up correctly.
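
(For anyone wanting to do the same, listing reparse points from a command prompt is one way to check, something like:

    dir /AL /S D:\ > reparse-points.txt

with D:\ standing in for whichever drive is being backed up.)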

The repair finally finished, with 2 warnings and 2 errors, though it didn't help that it was battling against a defrag, as I had forgotten to disable the scheduled task for that (also now sorted). Currently I'm trying out the backup while monitoring it closely and waiting for the result.

I think I chose a bad week to start my upgrades to Windows Server 2022, as it seems Microsoft has messed up this week's updates for it, e.g. they cause domain controllers to boot loop, and a domain controller was what I was going to upgrade next. Think I'll wait.

We’ve heard of some weird huge negative (unlike your huge positive) size totals for unknown reasons.

File size remaining goes negative has some discussion and might also give ideas on how to monitor.

If you actually have a file that reads back huge, you might be able to see its name in the file progress area.

Post them if you like. Errors are more serious than warnings. Was the repair recreating the database?
I would have thought that enough of an error would block database use until whatever it was was fixed.

Yes, it was a recreate of the database that I ran, and these are the warnings/errors it reported:

Warnings 2 
2022-01-13 19:17:29 +01 - [Warning-Duplicati.Library.Main.Database.LocalRecreateDatabase-MissingVolumesDetected]: Found 2 missing volumes; attempting to replace blocks from existing volumes
2022-01-13 22:31:19 +01 - [Warning-Duplicati.Library.Main.Database.LocalRecreateDatabase-MissingVolumesDetected]: Found 2 missing volumes; attempting to replace blocks from existing volumes
Errors 2 
2022-01-13 18:21:48 +01 - [Error-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-b0fa9f3fcc3b2457ba5a26ab4eb4fa59d.dblock.zip.aes by duplicati-i43b7e5cf5295432cb274b87674c9e13f.dindex.zip.aes, but not found in list, registering a missing remote file
2022-01-13 18:59:41 +01 - [Error-Duplicati.Library.Main.Operation.RecreateDatabaseHandler-MissingFileDetected]: Remote file referenced as duplicati-b5a6d488e4fe244daa53d88676a45f36a.dblock.zip.aes by duplicati-if2d238896f964832a692bdf5f020b36b.dindex.zip.aes, but not found in list, registering a missing remote file

After all this everything seems to be fine now - once the backups completed the file count/size returned to normal. Subsequent scheduled runs have been fine as well.

The warnings lead me to some code involving temporary files that looks questionable now (maybe it was better before).

Ignoring that (they're only warnings), the errors followed by a success lead me to think you might be fine, but you can run:

The AFFECTED command, naming the two mentioned dblock files.

Maybe the same check is possible with

The LIST-BROKEN-FILES command

You can run those on the Commandline page after adjusting at least the Command and the Commandline arguments.
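
Roughly, that means keeping the storage URL and options the page pre-fills from the job, setting Command to affected (then list-broken-files), and putting the two file names in the Commandline arguments box - sketched as equivalent command lines:

    Duplicati.CommandLine.exe affected <storage-url> duplicati-b0fa9f3fcc3b2457ba5a26ab4eb4fa59d.dblock.zip.aes duplicati-b5a6d488e4fe244daa53d88676a45f36a.dblock.zip.aes --dbpath=<job database>
    Duplicati.CommandLine.exe list-broken-files <storage-url> --dbpath=<job database>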

Running AFFECTED gave me "No files are affected", and LIST-BROKEN-FILES gave "No filesets were recorded as broken".

So I think I found the cause of the failures, just not the reason why it is happening. It's memory: with 6GB free out of 10GB, when a backup runs the memory usage hits the full 10GB and starts a massive amount of swapping to the pagefile.

Well, it's the pagefile now, because for some reason the pagefile of the VM had been disabled, so when it first happened it royally screwed up the backup amongst other things. With the pagefile back in action it's still going crazy, I've seen it go up to 14GB, but it hasn't trashed anything else. The worst thing is that when it happens it's almost impossible to access the server to get a clear picture of which process is responsible, but the few times I did, nothing was showing in either Task Manager or Resource Monitor.

Any idea why Duplicati is suddenly using so much RAM? Is there a good way to control its usage so it releases memory more quickly? I just gave the VM an extra 4GB and within a minute it was pushing past that.

Suddenly meaning same Duplicati, different OS? No ideas. If you're on OneDrive or similar there's an investigation underway, but the original issue was filed on 2.0.5.114, so whatever it is, it isn't that new…

I suppose, if you want, you could post an Export As Command-line with sensitive information redacted.
Possibly some configuration option is memory-hungry; e.g. use-block-cache might be, for a large block count.
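
The export is just one long command, so a redacted version would look very roughly like the sketch below (all values are placeholders); the interesting part is whether something like --use-block-cache=true is sitting among the advanced options:

    "C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" backup "<storage-url>" "D:\Data\" --backup-name=<name> --dbpath=<path> --passphrase=<redacted> --use-block-cache=true ...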

Just an update, as I haven't forgotten about this - I found a possible cause in my case, but I need to confirm something, and for that I've been waiting a few days for a database rebuild of my Wasabi-based backup.

But the memory issue is resolved, and it was, as suspected, due to use-block-cache, which was never meant to be enabled. I think when I rebuilt the server last year and created the backup jobs, I chose the wrong option when I meant to enable "use-background-io-priority" - all my other servers have that enabled.
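
In other words, the job had ended up saved with the first of these when what I actually wanted was the second (a sketch, with the other options omitted):

    --use-block-cache=true
    --use-background-io-priority=true

As I understand it, use-block-cache keeps the block hash lookups in memory, which with this much data is exactly the kind of thing that would eat RAM.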
