SharpCompress.Compressors.LZMA.DataErrorException

Hello,

I have received these errors:

Feb 14, 2018 7:22 PM: Fatal error
{"ClassName":"SharpCompress.Compressors.LZMA.DataErrorException","Message":"Data Error","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":" at SharpCompress.Compressors.LZMA.LzmaStream.Read(Byte buffer, Int32 offset, Int32 count)\r\n at Duplicati.Library.Utility.Utility.ForceStreamRead(Stream stream, Byte buf, Int32 count)\r\n at Duplicati.Library.Main.Volumes.BlockVolumeReader.ReadBlock(String hash, Byte blockbuffer)\r\n at Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction)\r\n at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact)\r\n at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)\r\n at Duplicati.Library.Main.Operation.BackupHandler.Run(String sources, IFilter filter)\r\n at Duplicati.Library.Main.Controller.<>c__DisplayClass16_0.b__0(BackupResults result)\r\n at Duplicati.Library.Main.Controller.RunAction[T](T result, String& paths, IFilter& filter, Action`1 method)\r\n at Duplicati.Library.Main.Controller.Backup(String inputsources, IFilter filter)\r\n at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":"8\nRead\nSharpCompress, Version=0.16.2.0, Culture=neutral, PublicKeyToken=afb0a02973931d96\nSharpCompress.Compressors.LZMA.LzmaStream\nInt32 Read(Byte, Int32, Int32)","HResult":-2146233088,"Source":"SharpCompress","WatsonBuckets":null}

And:

Feb 14, 2018 7:22 PM: Operation Get with file duplicati-bf4034a2d09b8431bbf2a5a9268695fc4.dblock.zip.aes attempt 1 of 5 failed with message: Thread was being aborted.
{"ClassName":"System.Threading.ThreadAbortException","Message":"Thread was being aborted.","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":" at System.Net.ConnectStream.Read(Byte buffer, Int32 offset, Int32 size)\r\n at Amazon.Runtime.Internal.Util.CachingWrapperStream.Read(Byte buffer, Int32 offset, Int32 count)\r\n at Amazon.Runtime.Internal.Util.HashStream.Read(Byte buffer, Int32 offset, Int32 count)\r\n at Duplicati.Library.Utility.Utility.CopyStream(Stream source, Stream target, Boolean tryRewindSource, Byte buf)\r\n at Duplicati.Library.Backend.S3Wrapper.GetFileStream(String bucketName, String keyName, Stream target)\r\n at Duplicati.Library.Backend.S3.Get(String remotename, Stream output)\r\n at Duplicati.Library.Main.BackendManager.coreDoGetPiping(FileEntryItem item, IEncryption useDecrypter, Int64& retDownloadSize, String& retHashcode)\r\n at Duplicati.Library.Main.BackendManager.DoGet(FileEntryItem item)\r\n at Duplicati.Library.Main.BackendManager.ThreadRun()\r\n at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)\r\n at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)\r\n at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)\r\n at System.Threading.ThreadHelper.ThreadStart()","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":"8\nRead\nSystem, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089\nSystem.Net.ConnectStream\nInt32 Read(Byte, Int32, Int32)","HResult":-2146233040,"Source":"System","WatsonBuckets":null}

This seems to be the same error mentioned in an earlier topic; I'm making a new post because that one is very old and petered out.

I too am using the zip container with the LZMA codec. (I no longer am; I switched to plain zip, but the file in question is still LZMA-in-a-zip.)

I found this, which mentions that the hash is SHA-256.

So, thinking the file was corrupt, I ran:

select
	name,
	type,
	size,
	hash
from
	Remotevolume
where
	name = 'duplicati-bf4034a2d09b8431bbf2a5a9268695fc4.dblock.zip.aes';
	
/* Name	Type	Size	Hash
duplicati-bf4034a2d09b8431bbf2a5a9268695fc4.dblock.zip.aes	Blocks	52339149	TsjdaCd7HyZ2ijlKIGjl0cOIhUfQVLtFioqRgzzs+7E= */

-- SHA-256 sum (as base64) of the file as downloaded: TsjdaCd7HyZ2ijlKIGjl0cOIhUfQVLtFioqRgzzs+7E=

So, I guess it’s not corrupt?

I was able to use SharpAESCrypt to decrypt the file, and 7zFM could extract it, but it's full of junk filenames that TrID can't identify. I assume these are not whole files but deltas of the files.

Am I missing something? Can I delete this from the backend?

(By the way, where does the UI log come from? It's not in the SQLite file associated with the backup, and the "duplicati-backup.sqlite" file isn't an SQLite file.)

EDIT: I forgot to mention that this file is rather old.

Hello @Ingmyv, welcome to the forum - and sorry for the long response time!


Based on the "Operation Get with file…" failure, I'm guessing it is NOT corrupt so much as the download of the file went wonky, but Duplicati tried to decompress it anyway.


The file names are hashes of the block contents. The file contents are individual blocks of your data (at most 100KB each, assuming you're using Duplicati defaults).

So they're technically not deltas, but if a particular block is changed and added to the backup, you'd have two different versions of that block. Combined with all the other blocks in a file, you could then restore either version of the file if you wanted.
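
If you want to sanity-check that 100KB figure against your local database, a quick query like this should do it (a sketch; it assumes the Block table carries a Size column alongside the Hash and VolumeID columns queried elsewhere in this thread):

-- Largest block should not exceed the configured blocksize
-- (102400 bytes with default settings).
select count(*) as 'Total Blocks', max(Size) as 'Largest Block'
from Block;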


No, I don't think you're missing anything. Without seeing more of your logs this is just a guess, but most likely there's a cleanup step going on trying to remove blocks from a deleted file (or an expired version of a file).

During this compact step, Duplicati downloads the current archive (which would hold about 512 blocks, since the default 50MB volume size divided by the default 100KB block size is 512) so it can pull out blocks that are still in use and put them into a new (likely smaller) archive.

So - if you delete the file, you’ll likely be deleting anywhere from 1 to 511 blocks of files that are still in your backup. At some point, it’s likely that all 512 blocks will no longer be needed at which point Duplicati should just delete the file (no download / decryption needed).

But how long it will be before that happens is hard to say.
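
If you're curious how close a volume is to that point, something along these lines should show how many blocks the local database still maps to each dblock volume (a sketch built from the Remotevolume and Block tables used elsewhere in this thread; 'Blocks' is the Type value your Remotevolume query returned):

-- Blocks still recorded against each dblock volume, busiest first.
select rv.Name, count(b.ID) as 'Block Count'
from Remotevolume rv
inner join Block b on b.VolumeID = rv.ID
where rv.Type = 'Blocks'
group by rv.Name
order by count(b.ID) desc;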


Very long story short, I’m guessing you’re either no longer getting the error (it was a transient download problem) OR you are getting this error every run due to the automatic cleanup step.

If it's still happening, you could try running with the --no-auto-compact parameter enabled and see if that stops the error - but be aware it's not a FIX, just a confirmation check (and potential workaround).

--no-auto-compact (default false)
If a large number of small files are detected during a backup, or wasted space is found after deleting backups, the remote data will be compacted. Use this option to disable such automatic compacting and only compact when running the compact command.

Another option would be to find what file parts / versions are stored in that archive and purge those versions from the backup.

Hi,

I ran with "--full-remote-verification" and it identified a number of files with problems:

duplicati-iaab1f1a308db42be8c76bd3e500abcef.dindex.zip.aes: 511 errors
Extra: +5hEkBSbXo9047dV5Et4OAHU4TgAepcREpYogXZXZII=
Extra: +AuIw2Cj5tRAjOUk9dZSeXCg1RNSW9af92pOKjezDoQ=
Extra: +B4sEGi7FqZLbzDQUv0w97dij+vckoyEI3FNAeMxJl0=
Extra: +DdSF1S9xDBOJtaq9ghJrdR6FmIeEA0zqhmuA2n737g=
Extra: +HIdamXrLCpLWCNGU6SHG+fepomfIj6LlXURwuIN9lo=
Extra: +K/wER6prxJs3D+TmfXpS/6OZPRga8ywQUbfo11B3io=
Extra: +MEoosrKehxKRWPXlfftG6JUhu01TiavRdTy3fFLHk0=
Extra: +MKuG+BIMXL9v5NkfH66+Za7QIK1kWmXEx454C+aq2g=
Extra: +Rc4UoVkKNaJS1laM+DICv1sSP14TspVPsf+0Nbu70M=
Extra: +dRZSt5OxMI3Ln+KSp5LKoXDgtSpN0Ba521CeByZrHQ=
… and 501 more

duplicati-i300c206ba95c41e3888e343a8e70a491.dindex.zip.aes: 523 errors
Extra: +CoFOArfGAsQfVj7Lz8zrN8swrhGoCozKmEAuhUBYRU=
Extra: +hs0Evdd0XkKzTrgnP26whWKtSaTuxoMKooagdN3u6s=
Extra: +j5GRtoEMqGiSEcNdMUxTD5oIp735B+rwk05Jbgt34s=
Extra: +mtu22OOLFeretBdU889FLM31LjPlFKPXH6BBMXECds=
Extra: +pevjaPc6QgeUQZf1JDGLtn0pM6n0V73YucTqMAb+ps=
Extra: +qKUrKJZWwmcyyd1NB4m8NKjZSxzFtuW8i5SV6ITrD8=
Extra: +sSSFWhKZW9NLGHQ/kA0vOF5cR3TKSc2BPUa2Go8FJI=
Extra: /84IA35N8jT6eq1/O4WizL7E2T1rCWVtc8WYXrYj020=
Extra: /BDPGlkq3tjc7tMaoYfa/8DGVEG7GS6Gi3pFM/9iKpE=
Extra: /XeOYMcHphIM057HyngnG0Qh9tyBTYxrjr6o3RfiHDM=
… and 513 more

duplicati-be1f2d6dea6344a28812780718f544db5.dblock.zip.aes: 523 errors
Extra: +CoFOArfGAsQfVj7Lz8zrN8swrhGoCozKmEAuhUBYRU=
Extra: +hs0Evdd0XkKzTrgnP26whWKtSaTuxoMKooagdN3u6s=
Extra: +j5GRtoEMqGiSEcNdMUxTD5oIp735B+rwk05Jbgt34s=
Extra: +mtu22OOLFeretBdU889FLM31LjPlFKPXH6BBMXECds=
Extra: +pevjaPc6QgeUQZf1JDGLtn0pM6n0V73YucTqMAb+ps=
Extra: +qKUrKJZWwmcyyd1NB4m8NKjZSxzFtuW8i5SV6ITrD8=
Extra: +sSSFWhKZW9NLGHQ/kA0vOF5cR3TKSc2BPUa2Go8FJI=
Extra: /84IA35N8jT6eq1/O4WizL7E2T1rCWVtc8WYXrYj020=
Extra: /BDPGlkq3tjc7tMaoYfa/8DGVEG7GS6Gi3pFM/9iKpE=
Extra: /XeOYMcHphIM057HyngnG0Qh9tyBTYxrjr6o3RfiHDM=
… and 513 more

duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes: 1 errors
Error: Data Error

I copied the database, removed them from the server, and ran "purge broken".

After that, everything seems OK (i.e., I believe it put all of the files that were removed from the backend by deleting the zips back onto the backend).

I used the ">" redirector to write out the list of "broken" files, or so I thought; I got an empty file for some reason. So now, more out of curiosity than anything else: can I find out which files are in the bad zips?

I took my copy of the database and tried:

select
	count (*)
from
	Blockset bs
left join file f on
	f.BlocksetID = bs.ID
left join block b on
	b.Hash = bs.FullHash
left join Remotevolume rv on
	b.VolumeID = rv.ID
where
	coalesce(
		f.ID,
		f.Path,
		f.BlocksetID,
		f.MetadataID
	) is not null
	and rv.Name in(
		'duplicati-iaab1f1a308db42be8c76bd3e500abcef.dindex.zip.aes',
		'duplicati-i300c206ba95c41e3888e343a8e70a491.dindex.zip.aes',
		'duplicati-be1f2d6dea6344a28812780718f544db5.dblock.zip.aes',
		'duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes'
	);

which yields only 118. (I'm unsure of the significance of the 5 rows where the Blockset table has an entry but no corresponding entry exists in the File table; however, all of them have the rv.Name 'duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes', the one that the verification identified as "Data Error".)

This seems low given that the verification reports > 500 errors per file.

The example blocks with errors do not seem to point to files:

	select
		f.Path,
		b.Hash as 'b.hash',
		rv.Name,
		rv.Hash as 'rv.hash'
	from
		block b
	left join Remotevolume rv on
		b.VolumeID = rv.ID
	left join blockset bs on
		b.Hash = bs.FullHash
	left join file f on
		f.BlocksetID = bs.id
	where
		b.hash in(
			'+5hEkBSbXo9047dV5Et4OAHU4TgAepcREpYogXZXZII=',
			'+AuIw2Cj5tRAjOUk9dZSeXCg1RNSW9af92pOKjezDoQ=',
			'+B4sEGi7FqZLbzDQUv0w97dij+vckoyEI3FNAeMxJl0=',
			'+DdSF1S9xDBOJtaq9ghJrdR6FmIeEA0zqhmuA2n737g=',
			'+HIdamXrLCpLWCNGU6SHG+fepomfIj6LlXURwuIN9lo=',
			'+K/wER6prxJs3D+TmfXpS/6OZPRga8ywQUbfo11B3io=',
			'+MEoosrKehxKRWPXlfftG6JUhu01TiavRdTy3fFLHk0=',
			'+MKuG+BIMXL9v5NkfH66+Za7QIK1kWmXEx454C+aq2g=',
			'+Rc4UoVkKNaJS1laM+DICv1sSP14TspVPsf+0Nbu70M=',
			'+dRZSt5OxMI3Ln+KSp5LKoXDgtSpN0Ba521CeByZrHQ='
		);
Path |b.hash                                       |Name                                                       |rv.hash                                      |
-----|---------------------------------------------|-----------------------------------------------------------|---------------------------------------------|
     |+5hEkBSbXo9047dV5Et4OAHU4TgAepcREpYogXZXZII= |duplicati-b6547415e432d488cbd9f86fbd0495633.dblock.zip.aes |xyjP2Vn4cvBwoeNSdtnYxQ9e1RsWWy45ZsrqtfQRYoI= |
     |+AuIw2Cj5tRAjOUk9dZSeXCg1RNSW9af92pOKjezDoQ= |duplicati-b6547415e432d488cbd9f86fbd0495633.dblock.zip.aes |xyjP2Vn4cvBwoeNSdtnYxQ9e1RsWWy45ZsrqtfQRYoI= |
     |+B4sEGi7FqZLbzDQUv0w97dij+vckoyEI3FNAeMxJl0= |duplicati-b6547415e432d488cbd9f86fbd0495633.dblock.zip.aes |xyjP2Vn4cvBwoeNSdtnYxQ9e1RsWWy45ZsrqtfQRYoI= |
     |+DdSF1S9xDBOJtaq9ghJrdR6FmIeEA0zqhmuA2n737g= |duplicati-b6547415e432d488cbd9f86fbd0495633.dblock.zip.aes |xyjP2Vn4cvBwoeNSdtnYxQ9e1RsWWy45ZsrqtfQRYoI= |
     |+HIdamXrLCpLWCNGU6SHG+fepomfIj6LlXURwuIN9lo= |duplicati-b6547415e432d488cbd9f86fbd0495633.dblock.zip.aes |xyjP2Vn4cvBwoeNSdtnYxQ9e1RsWWy45ZsrqtfQRYoI= |
     |+K/wER6prxJs3D+TmfXpS/6OZPRga8ywQUbfo11B3io= |duplicati-b6547415e432d488cbd9f86fbd0495633.dblock.zip.aes |xyjP2Vn4cvBwoeNSdtnYxQ9e1RsWWy45ZsrqtfQRYoI= |
     |+MEoosrKehxKRWPXlfftG6JUhu01TiavRdTy3fFLHk0= |duplicati-b6547415e432d488cbd9f86fbd0495633.dblock.zip.aes |xyjP2Vn4cvBwoeNSdtnYxQ9e1RsWWy45ZsrqtfQRYoI= |
     |+MKuG+BIMXL9v5NkfH66+Za7QIK1kWmXEx454C+aq2g= |duplicati-b6547415e432d488cbd9f86fbd0495633.dblock.zip.aes |xyjP2Vn4cvBwoeNSdtnYxQ9e1RsWWy45ZsrqtfQRYoI= |
     |+Rc4UoVkKNaJS1laM+DICv1sSP14TspVPsf+0Nbu70M= |duplicati-b6547415e432d488cbd9f86fbd0495633.dblock.zip.aes |xyjP2Vn4cvBwoeNSdtnYxQ9e1RsWWy45ZsrqtfQRYoI= |
     |+dRZSt5OxMI3Ln+KSp5LKoXDgtSpN0Ba521CeByZrHQ= |duplicati-b6547415e432d488cbd9f86fbd0495633.dblock.zip.aes |xyjP2Vn4cvBwoeNSdtnYxQ9e1RsWWy45ZsrqtfQRYoI= |

Actually, I think it’s just the opposite. The purge broken command removes from the local database any reference to stuff that was broken at the destination. So by deleting the backend zips you effectively removed those file blocks from your backup.

As for finding out “which files are in bad zips” I’m not sure at this point (assuming when you say “files” you mean “source files that have been backed up”). But somebody else (perhaps @kees-z, @kenkendk, or @Pectojin) might have an idea.

You can do something like this to get an idea of what files are in each volume. (Note that it joins through BlocksetEntry, which maps every block of a file; your earlier query matched Block.Hash against Blockset.FullHash, which can only match files small enough to fit in a single block, and that is likely why your count came out low.)

select Remotevolume.Name as 'Volume Name', Block.Hash as 'Block Hash', File.Path as 'File'
from Remotevolume
inner join Block on Remotevolume.ID = Block.VolumeID
inner join BlocksetEntry on BlocksetEntry.BlockID = Block.ID
inner join File on File.BlocksetID = BlocksetEntry.BlocksetID
LIMIT 10;

And if you know your volume name, you can do:

select Remotevolume.Name as 'Volume Name', Block.Hash as 'Block Hash', File.Path as 'File'
from Remotevolume
inner join Block on Remotevolume.ID = Block.VolumeID
inner join BlocksetEntry on BlocksetEntry.BlockID = Block.ID
inner join File on File.BlocksetID = BlocksetEntry.BlocksetID
where Remotevolume.Name = 'duplicati-b4ab5a8e7618243d1993677ef88e2e534.dblock.zip.aes'

It spits out something like this:

[screenshot of query output]

This is a good example because you can see that angular_1.6.6.min.js is listed twice with two different block hashes: it's larger than 100KB, so it has to be split across two blocks 🙂
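
Along the same lines, a variant of that join can count how many blocks each source file spans (a sketch; any file with more than one block is bigger than the blocksize):

-- Files spanning multiple blocks, largest first.
select File.Path as 'File', count(*) as 'Block Count'
from File
inner join BlocksetEntry on BlocksetEntry.BlocksetID = File.BlocksetID
group by File.Path
having count(*) > 1
order by count(*) desc
LIMIT 10;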


Ah, Ok. I modified it somewhat:

select Remotevolume.Name as 'Volume Name', Block.Hash as 'Block Hash', File.Path as 'File'
from Remotevolume
inner join Block on Remotevolume.ID = Block.VolumeID
inner join BlocksetEntry on BlocksetEntry.BlockID = Block.ID
inner join File on File.BlocksetID = BlocksetEntry.BlocksetID
where Remotevolume.Name in(
		'duplicati-iaab1f1a308db42be8c76bd3e500abcef.dindex.zip.aes',
		'duplicati-i300c206ba95c41e3888e343a8e70a491.dindex.zip.aes',
		'duplicati-be1f2d6dea6344a28812780718f544db5.dblock.zip.aes',
		'duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes'
	)
	group by File.Path
	order by length(File.Path) asc;
Volume Name                                                |Block Hash                                   |File                                                                                                         |
-----------------------------------------------------------|---------------------------------------------|-------------------------------------------------------------------------------------------------------------|
duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes |4rrDqE3Y1b1JekMO9zlpcjCPVP/DKjins4DSJ9+HKsQ= |E:\Backups\Android\Download.zip                                                                              |
duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes |QirPJmtIshOAYaw2rafWmUL7oeINUJzVU5xD7yRq+1A= |E:\Backups\Android\2017-03-23_16-27.novabackup                                                               |
duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes |+WIH9PxFnxiYP41A04IWkgi2TTpW9N5BcHbNNvQahLg= |E:\Backups\Android\2017-06-14_08-36.novabackup                                                               |
duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes |LsNlVKgdg4252OALcStYDZtm6MHRh847YPa+wV/R6Dw= |E:\Backups\Android\NineSettings-20170614T083518.conf                                                         |
duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes |Nt4s1NIuHp2PRKq5IYzWV6p5CD64Oic4CiC8sXeoCMo= |E:\Backups\Android\backup_Apr_08_2017_9-07-26_AM.seb                                                         |
duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes |47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= |E:\Backups\Android\Whatsapp\Media\WhatsApp Video\.nomedia                                                    |
duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes |47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= |E:\Backups\Android\Whatsapp\Media\WhatsApp Images\.nomedia                                                   |
duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes |47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= |E:\Backups\Grandstream\x101\2017.11.25\00_0b_82_91_f2_5e.uf                                                  |
duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes |47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= |E:\Backups\Android\Whatsapp\Media\WhatsApp Stickers\.nomedia                                                 |
duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes |47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= |E:\Backups\Android\Whatsapp\Media\WhatsApp Audio\Sent\.nomedia                                               |
duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes |47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= |E:\Backups\Android\Whatsapp\Media\WhatsApp Video\Sent\.nomedia                                               |
duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes |47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= |E:\Backups\Android\Whatsapp\Media\WhatsApp Images\Sent\.nomedia                                              |
duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes |47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= |E:\Backups\Android\Whatsapp\Media\WhatsApp Voice Notes\.nomedia                                              |
duplicati-be85b7e54e15a4627839019f164b24c44.dblock.zip.aes |47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= |E:\Backups\Android\Whatsapp\Media\WhatsApp Documents\Sent\.nomedia                                           |

[snip]

Looks like nothing for which the 'revision history' would be of great importance. I'll spot-check the backup and make sure the files can still be restored.

Thanks.

Yes, looks good. Files which were deleted off of the disk have not been re-uploaded (which is to be expected, and which is acceptable in this context), but otherwise everything has been.

This drive is itself a backup of a drive that other backup programs, or manual backups of programs (such as WhatsApp), write to or are written to, so versioning is controlled either by the other backup program or by me. If a file on this drive is deleted, it is guaranteed never to be wanted again, so we're good.