Can't complete backup or database compact

The backup won't complete when deleting unwanted files, and the database compact won't complete either. I don't understand the error messages or know what to do. The bug report is too big to upload, so here is the last set of error messages. Any help is welcome. Thanks.

Aug 17, 2019 2:30 PM: The operation Compact has failed with error: Object reference not set to an instance of an object.
{"ClassName":"System.NullReferenceException","Message":"Object reference not set to an instance of an object.","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":" at SharpCompress.Readers.AbstractReader`2.get_Entry()\r\n at SharpCompress.Readers.AbstractReader`2.LoadStreamForReading(Stream stream)\r\n at Duplicati.Library.Compression.FileArchiveZip.LoadEntryTable()\r\n at Duplicati.Library.Compression.FileArchiveZip.GetEntry(String file)\r\n at Duplicati.Library.Compression.FileArchiveZip.OpenRead(String file)\r\n at Duplicati.Library.Main.Volumes.VolumeReaderBase.ReadManifests(Options options)\r\n at Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)\r\n at Duplicati.Library.Main.Operation.CompactHandler.Run()\r\n at Duplicati.Library.Main.Controller.RunAction[T](T result, String& paths, IFilter& filter, Action`1 method)\r\n at Duplicati.Library.Main.Controller.Compact()\r\n at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":"8\nget_Entry\nSharpCompress, Version=0.18.2.0, Culture=neutral, PublicKeyToken=afb0a02973931d96\nSharpCompress.Readers.AbstractReader`2\nTEntry get_Entry()","HResult":-2147467261,"Source":"SharpCompress","WatsonBuckets":null}

Aug 17, 2019 2:30 PM: Zip archive appears to have a broken Central Record Header, switching to stream mode
{"ClassName":"System.NotSupportedException","Message":"Unknown header: 1575336582","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":" at SharpCompress.Common.Zip.ZipHeaderFactory.ReadHeader(UInt32 headerBytes, BinaryReader reader, Boolean zip64)\r\n at SharpCompress.Common.Zip.SeekableZipHeaderFactory.d__3.MoveNext()\r\n at SharpCompress.Archives.Zip.ZipArchive.d__16.MoveNext()\r\n at SharpCompress.LazyReadOnlyCollection`1.LazyLoader.MoveNext()\r\n at Duplicati.Library.Compression.FileArchiveZip.LoadEntryTable()","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":"8\nReadHeader\nSharpCompress, Version=0.18.2.0, Culture=neutral, PublicKeyToken=afb0a02973931d96\nSharpCompress.Common.Zip.ZipHeaderFactory\nSharpCompress.Common.Zip.Headers.ZipHeader ReadHeader(UInt32, System.IO.BinaryReader, Boolean)","HResult":-2146233067,"Source":"SharpCompress","WatsonBuckets":null}

Aug 17, 2019 2:30 PM: Backend event: Get - Completed: duplicati-bbd0d0656a09540118b0627b4d0ebb99f.dblock.zip.aes (15 GB)

Aug 17, 2019 2:20 PM: Backend event: Get - Started: duplicati-bbd0d0656a09540118b0627b4d0ebb99f.dblock.zip.aes (15 GB)

Did you mean to set your remote volume size to 15GB? Depending on the speed of your connection to where the backups are stored, that can really slow down compact operations, because a compact has to download the affected dblock volumes, repackage them, and upload the result. 15GB seems way too large to me, unless maybe you are backing up terabytes of data (but even then I would not go that high - maybe 1GB tops - depending on the speed of your connection).

I’m not sure if that’s the root cause of your issues, but it jumped out at me as a potential problem. Note that the default is 50MB.
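
For reference, the setting being discussed is "Remote volume size" on the Options page of the job settings in the GUI; on the command line it should be the --dblock-size advanced option. A rough sketch of what that looks like, with a placeholder storage URL, source folder and passphrase that you would swap for your own job's values:

    Duplicati.CommandLine.exe backup "<storage-url>" "<source-folder>" --dblock-size=1GB --passphrase=<your-passphrase>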

Thanks for the suggestion. I'll give it a try. I'm just an uneducated user. I think I set that four years ago, when 15GB would have been the total size of my backup. Now I'm backing up about 60GB, but I never considered changing the configuration settings. Speed or duration has not been an issue for me, since I just let the backup run overnight when nothing else is running on the laptop or network. I'm not sure I see the connection between the volume size and being able to delete unwanted files, but I'll try it.

Maybe you're now hitting the retention limits of your backups? Or for some reason auto-compaction now thinks it should compact some of your remote dblock files. Having 15GB remote volumes will make that take a while. The option names below are relevant here, as I remember them (so double-check them against the manual).
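
    --threshold=25            percentage of wasted space in a volume before it gets compacted (25 is the default)
    --small-file-max-count=20 compact once more than this many "small" volumes have accumulated
    --no-auto-compact=true    skip the automatic compact after backup/delete entirely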

You are allowed to change that setting, but it only affects newly uploaded volumes, so it won't immediately improve the performance of your backups (assuming this is the root cause of your issues, which I am not sure about).
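
If you would rather not wait for the next backup to run the compact, you can also trigger it manually from the command line. Roughly something like this (a sketch only - use the same storage URL, passphrase and advanced options as your backup job):

    Duplicati.CommandLine.exe compact "<storage-url>" --dblock-size=1GB --passphrase=<your-passphrase>

Keep in mind it still has to download, repackage and re-upload the affected volumes, so with 15GB files it will be slow.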