System.InvalidOperationException: Operation is not valid due to the current state of the object

When Duplicati backs up my server, it hits this error. It has happened two nights in a row now, and this is the third occurrence overall; the first was a week ago. Usually a manual backup works, but not this time.

I don’t understand the error trace or what I should do about it.

More detailed logs:

System.InvalidOperationException: Operation is not valid due to the current state of the object.
  at SharpCompress.Common.Zip.SeekableZipHeaderFactory.GetLocalHeader (System.IO.Stream stream, SharpCompress.Common.Zip.Headers.DirectoryEntryHeader directoryEntryHeader) [0x00038] in <5717dfb1db2745ffb30a27e1fee78b19>:0 
  at SharpCompress.Common.Zip.SeekableZipFilePart.LoadLocalHeader () [0x0001f] in <5717dfb1db2745ffb30a27e1fee78b19>:0 
  at SharpCompress.Common.Zip.SeekableZipFilePart.GetCompressedStream () [0x00008] in <5717dfb1db2745ffb30a27e1fee78b19>:0 
  at SharpCompress.Archives.Zip.ZipArchiveEntry.OpenEntryStream () [0x0000b] in <5717dfb1db2745ffb30a27e1fee78b19>:0 
  at Duplicati.Library.Compression.FileArchiveZip.OpenRead (System.String file) [0x0002b] in <37e8a9f7ecb043a8ad4da491e858cd61>:0 
  at Duplicati.Library.Main.Volumes.BlockVolumeReader.ReadBlock (System.String hash, System.Byte[] blockbuffer) [0x0000c] in <9cab5b8f1b4f49ec980acae8e278968b>:0 
  at Duplicati.Library.Main.Operation.CompactHandler.DoCompact (Duplicati.Library.Main.Database.LocalDeleteDatabase db, System.Boolean hasVerifiedBackend, System.Data.IDbTransaction& transaction, Duplicati.Library.Main.BackendManager sharedBackend) [0x00301] in <9cab5b8f1b4f49ec980acae8e278968b>:0 
  at Duplicati.Library.Main.Operation.DeleteHandler.DoRun (Duplicati.Library.Main.Database.LocalDeleteDatabase db, System.Data.IDbTransaction& transaction, System.Boolean hasVerifiedBackend, System.Boolean forceCompact, Duplicati.Library.Main.BackendManager sharedManager) [0x003a2] in <9cab5b8f1b4f49ec980acae8e278968b>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired (Duplicati.Library.Main.BackendManager backend, System.Int64 lastVolumeSize) [0x000a5] in <9cab5b8f1b4f49ec980acae8e278968b>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x01048] in <9cab5b8f1b4f49ec980acae8e278968b>:0 
  at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task) [0x00050] in <9a758ff4db6c48d6b3d4d0e5c2adf6d1>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x00009] in <9cab5b8f1b4f49ec980acae8e278968b>:0 
  at Duplicati.Library.Main.Controller+<>c__DisplayClass14_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x0004b] in <9cab5b8f1b4f49ec980acae8e278968b>:0 
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0026f] in <9cab5b8f1b4f49ec980acae8e278968b>:0 
  at Duplicati.Library.Main.Controller.Backup (System.String[] inputsources, Duplicati.Library.Utility.IFilter filter) [0x00074] in <9cab5b8f1b4f49ec980acae8e278968b>:0 
  at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x00349] in <30a34d71126b48248d040dda634ddad9>:0 

The backend is OVH’s OpenStack.

Based on what I’ve observed from the webui (the logs don’t contain anything, and I didn’t run this from the CLI), it seems to back up the files successfully, but it fails when uploading the data to the backend.

I exported the backup job and ran it from the terminal. Here are the results:

Backup started at 10/10/2023 2:09:30 AM
Checking remote backup ...
  Listing remote folder ...
Scanning local files ...
  0 files need to be examined (0 bytes)
  425 files need to be examined (7.17 GB)
  112 files need to be examined (7.13 GB)
  31 files need to be examined (309.58 MB)
  0 files need to be examined (0 bytes)
  Uploading file (8.31 MB) ...
  Uploading file (251.25 KB) ...
  Uploading file (50.92 KB) ...
  Deleting file duplicati-20231010T020117Z.dlist.zip.aes (50.92 KB) ...
  Downloading file (47.54 MB) ...
  Downloading file (49.95 MB) ...
  Downloading file (27.99 MB) ...
Fatal error => Operation is not valid due to the current state of the object.

System.InvalidOperationException: Operation is not valid due to the current state of the object.
  at SharpCompress.Common.Zip.SeekableZipHeaderFactory.GetLocalHeader (System.IO.Stream stream, SharpCompress.Common.Zip.Headers.DirectoryEntryHeader directoryEntryHeader) [0x00038] in <5717dfb1db2745ffb30a27e1fee78b19>:0
  at SharpCompress.Common.Zip.SeekableZipFilePart.LoadLocalHeader () [0x0001f] in <5717dfb1db2745ffb30a27e1fee78b19>:0
  at SharpCompress.Common.Zip.SeekableZipFilePart.GetCompressedStream () [0x00008] in <5717dfb1db2745ffb30a27e1fee78b19>:0
  at SharpCompress.Archives.Zip.ZipArchiveEntry.OpenEntryStream () [0x0000b] in <5717dfb1db2745ffb30a27e1fee78b19>:0
  at Duplicati.Library.Compression.FileArchiveZip.OpenRead (System.String file) [0x0002b] in <37e8a9f7ecb043a8ad4da491e858cd61>:0
  at Duplicati.Library.Main.Volumes.BlockVolumeReader.ReadBlock (System.String hash, System.Byte[] blockbuffer) [0x0000c] in <9cab5b8f1b4f49ec980acae8e278968b>:0
  at Duplicati.Library.Main.Operation.CompactHandler.DoCompact (Duplicati.Library.Main.Database.LocalDeleteDatabase db, System.Boolean hasVerifiedBackend, System.Data.IDbTransaction& transaction, Duplicati.Library.Main.BackendManager sharedBackend) [0x00301] in <9cab5b8f1b4f49ec980acae8e278968b>:0
  at Duplicati.Library.Main.Operation.DeleteHandler.DoRun (Duplicati.Library.Main.Database.LocalDeleteDatabase db, System.Data.IDbTransaction& transaction, System.Boolean hasVerifiedBackend, System.Boolean forceCompact, Duplicati.Library.Main.BackendManager sharedManager) [0x003a2] in <9cab5b8f1b4f49ec980acae8e278968b>:0
  at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired (Duplicati.Library.Main.BackendManager backend, System.Int64 lastVolumeSize) [0x000a5] in <9cab5b8f1b4f49ec980acae8e278968b>:0
  at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x01048] in <9cab5b8f1b4f49ec980acae8e278968b>:0
  at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task) [0x00050] in <9a758ff4db6c48d6b3d4d0e5c2adf6d1>:0
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x00009] in <9cab5b8f1b4f49ec980acae8e278968b>:0
  at Duplicati.Library.Main.Controller+<>c__DisplayClass14_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x0004b] in <9cab5b8f1b4f49ec980acae8e278968b>:0
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0026f] in <9cab5b8f1b4f49ec980acae8e278968b>:0
  at Duplicati.Library.Main.Controller.Backup (System.String[] inputsources, Duplicati.Library.Utility.IFilter filter) [0x00074] in <9cab5b8f1b4f49ec980acae8e278968b>:0
  at Duplicati.CommandLine.Commands.Backup (System.IO.TextWriter outwriter, System.Action`1[T] setup, System.Collections.Generic.List`1[T] args, System.Collections.Generic.Dictionary`2[TKey,TValue] options, Duplicati.Library.Utility.IFilter filter) [0x00119] in <81fea8e154eb4fd9aa8b06aac66d311a>:0
  at (wrapper delegate-invoke) System.Func`6[System.IO.TextWriter,System.Action`1[Duplicati.Library.Main.Controller],System.Collections.Generic.List`1[System.String],System.Collections.Generic.Dictionary`2[System.String,System.String],Duplicati.Library.Utility.IFilter,System.Int32].invoke_TResult_T1_T2_T3_T4_T5(System.IO.TextWriter,System.Action`1<Duplicati.Library.Main.Controller>,System.Collections.Generic.List`1<string>,System.Collections.Generic.Dictionary`2<string, string>,Duplicati.Library.Utility.IFilter)
  at Duplicati.CommandLine.Program.ParseCommandLine (System.IO.TextWriter outwriter, System.Action`1[T] setup, System.Boolean& verboseErrors, System.String[] args) [0x00342] in <81fea8e154eb4fd9aa8b06aac66d311a>:0
  at Duplicati.CommandLine.Program.RunCommandLine (System.IO.TextWriter outwriter, System.IO.TextWriter errwriter, System.Action`1[T] setup, System.String[] args) [0x00002] in <81fea8e154eb4fd9aa8b06aac66d311a>:

I tried to repair the database, which reported success instantly. I then ran the command above again, but it resulted in the same error/output.

Hello

It doesn’t fail when uploading; it errors out when reading a file downloaded from your backend (while trying to compact after prior deletions). It seems that one of the files on your backend has an incorrect format. You can check this by running a test of your backend (enable full-block-verification and pass ‘all’ as the parameter).
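
A minimal sketch of what that test invocation could look like from a terminal; the storage URL, container name, and passphrase below are placeholders, not values from this job:

```shell
# Sketch only: substitute your real OpenStack storage URL and passphrase.
# Depending on the install, duplicati-cli may be "mono Duplicati.CommandLine.exe".
duplicati-cli test "openstack://my-container/backup-folder" all \
  --full-block-verification=true \
  --passphrase="my-passphrase"
```

Passing `all` as the samples argument makes the test cover every remote volume instead of the default sample of 3.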

If only index files are affected, you have lost no data, since index files can always be rebuilt from the data files. That’s unlikely though, as the last file downloaded before the error in your trace is 28 MB.

If one of your data files is incorrect, you can then use the affected command (don’t forget to remove all filters) to see what the impact is. If the affected files have been deleted from your system, there is not much you can do if this is your only backup, of course; but since the error happens while trying to compact, it’s possible that the affected files are supposed to be worthless anyway. Then try to use the purge command to remove the problematic data files. You could also download the files concerned for diagnostics (decrypt them manually with SharpAesCrypt.exe).
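
For illustration, a hedged sketch of those two steps; the storage URL, dblock filename, and passphrase are placeholders (take the real filename from the failing test output):

```shell
# Placeholder URL/filename/passphrase: substitute your own values.
duplicati-cli affected "openstack://my-container/backup-folder" \
  "duplicati-b0123456789abcdef0123456789abcdef.dblock.zip.aes"

# SharpAesCrypt's CLI takes: e|d <password> <from-path> <to-path>
SharpAesCrypt.exe d my-passphrase \
  duplicati-b0123456789abcdef0123456789abcdef.dblock.zip.aes \
  suspect.dblock.zip
```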

Finally, if all goes well, you may want to check your hardware: since the files pass AES decryption intact, the zip files were probably generated incorrectly on your system in the first place.

The ‘instantly’ is a bit disturbing. Maybe try running the repair in command-line mode while opening another tab and following the live log at verbose level.

Sorry for the late response. I appreciate the help!

In the meantime, I got an alert that 3 files were missing from remote storage. I ran the repair command successfully. I tried the backup again, but it failed with the same error.

It seems that one of the files on your backend has an incorrect format. You can check this by running a test of your backend (enable full-block-verification and pass ‘all’ as the parameter).

I tried running verification with --full-block-verification=true and got:

  Listing remote folder ...
  Downloading file (51.40 KB) ...
  Downloading file (58.12 KB) ...
  Downloading file (49.96 MB) ...
Examined 3 files and found no errors
Return code: 0

So I tried running the backup again…

Backup started at 10/16/2023 5:13:46 AM
Checking remote backup ...
  Listing remote folder ...
Scanning local files ...
  56 files need to be examined (5.82 GB) (still counting)
  424 files need to be examined (7.20 GB)
  112 files need to be examined (7.15 GB)
  31 files need to be examined (307.19 MB)
  Uploading file (9.56 MB) ...
  Uploading file (250.56 KB) ...
  Uploading file (51.11 KB) ...
  0 files need to be examined (0 bytes)
Checking remote backup ...
  Listing remote folder ...
Verifying remote backup ...
Remote backup verification completed
  Downloading file (51.11 KB) ...
  Downloading file (717.23 KB) ...
  Downloading file (13.79 MB) ...
  Duration of backup: 00:01:06
  Remote files: 809
  Remote size: 17.88 GB
  Total remote quota: 0 bytes
  Available remote quota: 0 bytes
  Files added: 0
  Files deleted: 0
  Files changed: 83
  Data uploaded: 9.85 MB
  Data downloaded: 14.54 MB
Backup completed successfully!
Return code: 0

And it worked this time…? I guess, problem solved?

If you do NOT pass ‘all’ as a parameter, only 3 files are verified. As these files are selected randomly, the test will not necessarily pick the damaged ones. The same applies to backup: by default, only 3 files are verified.

I tried with --full-block-verification=all and got the same result, with a warning about syntax:

2023-10-16 05:10:49 +00 - [Warning-Duplicati.Library.Main.Controller-OptionValidationError]: The value "all" supplied to --full-block-verification does not parse into a valid boolean, this will be treated as if it was set to "true"

I’m using the commandline interface from the webui.

Ah, you mean --full-block-verification=true and then add all as an argument on the end. Gotcha. Sorry about that!

It’s downloading all the files to verify now.

After downloading the files for verification, it identified a few files that failed to verify properly:

duplicati-i4f3384af401b4a8ea97428487e55e431.dindex.zip.aes: 1327 errors
	Extra: +2dOvqizgK9kExzgSDwyo3E3iosSvb3op0IitS8cLys=
	... and 1317 more

duplicati-i53abe59b62bb4be79e39863cac6478b6.dindex.zip.aes: 1 errors
	Extra: MRp3uKZKTHIhFX+wxrVANpK62ZD0/LJjwXx96azDDpk=

duplicati-b57345cb10ce3432c89d8aaa2c20a7bdd.dblock.zip.aes: 1 errors
	Error: Operation is not valid due to the current state of the object.

duplicati-bf4e1ced060784a41a88128c4a7819ee6.dblock.zip.aes: 1 errors
	Extra: MRp3uKZKTHIhFX+wxrVANpK62ZD0/LJjwXx96azDDpk=

Return code: 3

I downloaded the dblock file with the Operation is not valid error, installed aescrypt, decrypted the file, and tried to unzip it:

Extraction to directory "duplicati-b57345cb10ce3432c89d8aaa2c20a7bdd.dblock" failed (3 files failed.)

The three errors:

Failed! (Attempted to read more data than was available)
Failed! (Attempted to read more data than was available)
Failed! (Error on decrunching)

I’m not really equipped to do diagnostic analysis on what happened here, so I moved on to the AFFECTED command. It looks like 6 filesets are affected, going back a month (two weeks before I started getting the error). So I tried to purge the dblock, but I couldn’t manage it; the only output I got was Listing remote folder....

I even tried --include=duplicati-b57345cb10ce3432c89d8aaa2c20a7bdd.dblock.zip.aes and --include="*/duplicati-b57345cb10ce3432c89d8aaa2c20a7bdd.dblock.zip.aes", but that didn’t work either. When I check the backend, I can still see the file. And the backup is still failing. I haven’t tried deleting the file manually because I assume Duplicati does something extra.

Additionally, list-broken-files reports nothing.

For some more context, this server’s hard drive was 100% full for some period of time a month or so ago, and it’s likely this impacted the backup.

Purge does not delete remote files; it removes logical files (that is, backups of your local files) from the backup. So you can take the affected files (filesets) and purge them from the backup (since they could not be restored anyway).
To purge:

  • you have to remove the filters from the command line parameters (if any)
  • you can specify a range of versions (take that from affected command too)
  • you can specify dry-run to verify what you would do (remember that dry-run defaults to false, so set it explicitly).

Purge may fail if it would remove whole filesets; in that case you have to delete those filesets instead.
If the purge succeeds and affected no longer lists anything for the damaged block, you can remove that block file from the backend.
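
Putting those points together, a dry-run sketch with placeholder values (take the version range from the affected output):

```shell
# Sketch only: preview what purge would remove before committing to it.
duplicati-cli purge "openstack://my-container/backup-folder" \
  "/path/to/affected/file" \
  --version=0-5 \
  --dry-run=true
```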

Just so I understand: purge removes partial files/data that no longer exist either on my server or in remote storage from my local .sqlite database? And once I remove that data, I should try backing up again, since full-block verification should work now that I’m no longer checking the partial/corrupted data?

It’s quite possible that the files still exist on your computer (server). What matters is that the deduplicated copy of your files on the backend has a corrupted block. If it’s corrupted, it’s lost; if you no longer have the original version of the affected files, that’s sad, but there is not much that can realistically be done.
Once the files are purged from your backup, the references to the damaged block in your local database will be removed.
If you then run an affected command, it should return nothing, and the full test should succeed (the ‘Extra’ entries are annoying, but they should not stop you from restoring).
When the test succeeds, you can run a backup again, and if the affected files still exist on your computer, a new backup of those files may happen (if it was the current version that was removed; if it was a previous version, it is lost).

I’m not sure if this is adding to confusion, but be careful about exactly which command is being run.

The PURGE command is source-file oriented, so it could be used if you decide to remove some files. Deselecting them on the Source screen affects future versions; purging them from old versions is optional.

The PURGE-BROKEN-FILES command goes with the LIST-BROKEN-FILES command to tidy up after destination file losses by removing damaged source files from the backup data. I think this is your case.
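
A minimal sketch of that pair of commands, with a placeholder storage URL:

```shell
# Report which source files are broken by missing/damaged destination files...
duplicati-cli list-broken-files "openstack://my-container/backup-folder"
# ...then remove those entries from the backup data.
duplicati-cli purge-broken-files "openstack://my-container/backup-folder"
```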

Recovering by purging files explains further, but neglects to say that some types of damage can’t be recognized easily (this isn’t a deep test command; I think it just looks at the destination file listing).

Duplicati usually won’t even do a successful download of a damaged file because decryption will fail, however you seemingly got one that decrypted yet didn’t unzip. The full hard drive might have hurt it.

Regardless, once a destination file is proven bad, it’s probably best just to make it disappear, either by deleting it or by hiding it (for possible future reference) by moving it to another folder or changing the duplicati prefix.

Okay, this is definitely not what I wanted. Thank you for telling me to be careful!

Yep, that sounds like it.

I renamed the damaged dblock file, ran LIST-BROKEN-FILES, got a lot of output this time. Then I ran PURGE-BROKEN-FILES, which deleted a few dlist files. It didn’t seem to delete all the files detected in LIST-BROKEN-FILES, but that’s at least a start. Then I ran a backup.

Success! I’m running a test with --full-block-verification and all just to be sure, but given how long the backup ran without incident, I’m fairly confident it will keep working properly. I’ll update this thread once the verification completes.

Verification complete. The result is over a dozen files with supposedly hundreds of errors regarding Extra. From what I’ve read, this isn’t actually an error but a warning that occurs when the data files have more data in them than expected. Please do let me know if I’m wrong about this and this is something I should intervene on, though!

Backups no longer fail and there are no fatal errors, so I’ll consider it successful.

Thank you for your assistance @ts678 and @gpatel-fr!

The block-level “Extra” notes that the --full-remote-verification option can throw are thought to be harmless (based on a lot of history plus some understanding of the mechanism; a “fix” looked scary). Sometimes it’s better to live with a benign imperfection than risk breakage by changing the design heavily.

One further step you could take is to find out how well your destination actually handles disaster recovery, because some problems are really hard to spot. Are you occasionally doing the full Direct restore from backup files exercise on some other system? If not, the next best thing is probably to go to the Database screen on the current setup, find the path, manually rename the database (for protection), then see if you can Repair, ideally just from the dlist and dindex files (the first 70% of the progress bar); if it goes past that, look at the live log. A successful recreate in a reasonable time will probably let you keep either database, but the new one lacks logs (they’re local and stored in the database), so the old DB is likely larger. Pick either and keep going with it.
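
The recreate test above can be sketched from a terminal as well; the paths and URL are placeholders, and this assumes that pointing --dbpath at a location with no existing database makes repair rebuild it from the remote dlist/dindex (and, if needed, dblock) files:

```shell
# Sketch only: recreate the local database at a scratch path so the
# production database (already renamed for protection) is untouched.
duplicati-cli repair "openstack://my-container/backup-folder" \
  --dbpath=/tmp/recreate-test.sqlite \
  --passphrase="my-passphrase"
```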

If the DB recreate doesn’t work, it’s better to find out now than in a disaster, but fixing may get complex.

Thanks for the advice. I’m backing up a large amount of data, so direct restore is not something I do regularly. The local database suggestion should work nicely!