File length is invalid

Hello, I have a lot of errors in one of my Duplicati backups:

2022-06-01 09:04:58 +02 - [Error-Duplicati.Library.Main.Operation.RepairHandler-RemoteFileVerificationError]: Failed to perform verification for file: duplicati-b149d507025f3457e865a3ab7fe03f8c4.dblock.zip.aes, please run verify; message: File length is invalid

I have tried to repair the backup, but the same error comes back. How can I repair the backup?

If you have a hosed backup, try the command line tool: add the ‘version’ option, set it to ‘0’ to request deletion of the last version, and run it. If the bad version is not the last one, repeat.
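For reference, a rough sketch of what that looks like with the delete command (the storage URL, passphrase and database path are placeholders to fill in with your own values; the GUI Commandline screen pre-fills them from the saved job):

  Duplicati.CommandLine.exe delete <storage-url> --version=0 --passphrase=<passphrase> --dbpath=<path-to-local-db.sqlite>

Versions are numbered relative to the newest (0 = latest), so after deleting version 0 the next one moves up and the same command can simply be repeated.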

Thx a lot. I will try it. It would be great to have a button “repair as far as possible” that tries different work-arounds :slight_smile:

The errors occur even with the first version.

2022-06-01 17:38:37 +02 - [Error-Duplicati.Library.Main.Operation.RestoreHandler-PatchingFailed]: Failed to patch with remote file: "duplicati-b6c236b6396644147b33da93981a90fba.dblock.zip.aes", message: File length is invalid

Does the --version option also apply to backup creation/repair?

Is it possible for Duplicati to find all damaged Duplicati files plus all backed-up files that are no longer restorable, repair the Duplicati files, and back up again, from the source, everything that is missing from the backup? I could create a complete new backup, but this one is 7 GB and I would like to reuse parts of the old backup if possible. And I keep seeing the same problem with my other Duplicati backups too: when errors occur, they are hard to repair with simple means.

I don’t think so. It seems logical that when the software creates a backup, it’s one version higher than the last one. And AFAIK there is no backup repair. What can be repaired is the local database, from the backup. If the backup itself is bad, it will stay that way.

What’s strange is that removing a bad backup leads to another bad backup. Anyway, I think that if files were truncated because of a system (network?) problem, the checksum would be bad. The fact that Duplicati doesn’t complain about it seems to indicate that it’s correct; it just has a bad length. While I have not looked up the error message, my guess is that the file length recorded in the dlist file doesn’t match the real size of the other files (index and/or dblock).

Maybe it’s a software problem rather than the aftermath of a crash. What’s the setup? Backup computer OS, Duplicati version, backend type, install mode (vanilla, customized)?

When you get these (you posted two examples), are they the 0-byte files you were getting before?
Duplicati depends on the storage not losing its backup. Lost dblock files mean a damaged backup.
Previously you had tried WebDAV, and attempted SMB to Bitrix24. Did these files see any of that?

Your error messages look like they’re from a log file. What log level? If warning, look for prior ones.
This might save you the trouble of having to check files by hand. Another way to scan seems to be
list-broken-files, which appears able to quickly compare files against the sizes in its records.
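A sketch of running that from a command prompt (placeholders for the storage URL, passphrase and local database path; the GUI Commandline screen builds the same call from your saved job):

  Duplicati.CommandLine.exe list-broken-files <storage-url> --passphrase=<passphrase> --dbpath=<path-to-local-db.sqlite>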

To be certain that a file is broken, download it and try to decrypt it. Also try a good one, to validate the method.
AES Crypt is a GUI, and easier than Duplicati’s CLI SharpAESCrypt.exe. If the file is empty, don’t bother.
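If you do go the CLI route, a sketch assuming SharpAESCrypt’s usual e|d <password> <input> <output> syntax (the tool sits next to Duplicati.CommandLine.exe in the install folder; the output filename is arbitrary):

  SharpAESCrypt.exe d <passphrase> duplicati-b149d507025f3457e865a3ab7fe03f8c4.dblock.zip.aes test.zip

A truncated or corrupted file should fail to decrypt, while a good one yields a readable zip.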

“First” meaning the oldest one you can find in the Restore dropdown? That’s bad. Is this issue old?

For dblock files, the backup file is the only copy. Repair can fix missing dindex and dlist from DB info.

There’s no button, but you remove broken backup files, then purge broken source files from backup.
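In other words, roughly this sequence after deleting (or losing) the bad dblock files at the destination, with your own storage URL and options filled in:

  Duplicati.CommandLine.exe list-broken-files <storage-url> <options>
  Duplicati.CommandLine.exe purge-broken-files <storage-url> <options>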

But really, what you need is storage that works reliably, if the storage is still giving you corrupted files.

I “think” it seems to check size first. Regardless, I don’t think it continues on with further checking later.
There are other times when it’s just looking at a file size listing, and of course those can’t hash content.
I think by default there’s a file listing and check before the backup unless no-backend-verification is on.

Thank you very much. Unfortunately it can always happen, especially over the Internet, that a connection drops or data gets lost. A backup system should be built in such a way that, through redundancy, it keeps running even with defective blocks. Duplicati does not seem to be able to do that. Or are there options to add more redundancy?

I have now deleted the defective dblock.zip.aes files after all other attempts failed. The --rebuild-missing-dblock-files option did not help and failed. Unfortunately, purge-broken-files does not work in the web GUI. Or rather, it starts, but I don’t see any progress or log entries there.

Duplicati.CommandLine.exe purge-broken-files starts with “Listing remote folder …” but then aborts without any message. I do not know what to do.

I use Duplicati on different systems. When it runs well, I love it. When there are problems, often the only successful solution is to recreate the backup. I have many ideas for how Duplicati could be improved.

In this case, the backup runs over a VPN to a file share on a Windows system. All machines are Windows. I am using 2.0.6.3_beta_2021-06-17, installed with the Windows installer and no special settings.

This explains a lot. Every time I have seen CIFS used over the Internet, I have seen problems. If you want only Windows systems, recent Windows versions (>= 10) have SSH, hence can offer SFTP. I’d use that instead if I were you; no need for a VPN in this case. If you use old Windows 7/8 bangers as backup systems, well, Gates help you.
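For illustration, a Duplicati SFTP destination looks roughly like this (hostname, port, path and credentials are placeholders; on Windows 10+ the OpenSSH server is an optional feature you would enable first):

  ssh://backup-server:22/duplicati-backups?auth-username=backupuser&auth-password=<password>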

I don’t know of any backup system that is built to handle flaky hardware. Every time I have seen commercial software recommendations for backup hardware, the advice is always for top-of-the-line kit. I have yet to see a salesperson say on record that, well, this third-rate NAS could do just as well. If you buy low-cost backup systems, that’s your choice, but commercial backup vendors will laugh at you if you take them to task because your backup hardware has failed. Serious cloud providers all use RAID.

This is not simply a small defective block, but an entire missing (default) 50 MB file, correct?
I don’t think any backup program deals well with completely unreliable storage. You can ask.

Some (including Duplicati) have considered PAR2, but it’s not well suited IMO for file losses.
What you’re almost asking for is to store multiple complete file copies remotely. How many?

Duplicati guards against Internet connection glitches many ways. Reported errors are retried.
number-of-retries and retry-delay control this. You can see retries in a log taken at retry level.
About → Show log → Live → Retry is simplest, but you can also log to a log file if you prefer.
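For illustration, these are set as advanced options on the job or appended to a command line; the values here are just examples, not recommendations:

  --number-of-retries=5
  --retry-delay=10s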

Unreported errors, such as successful download with “wrong” content, are caught and retried.
You can watch that. Checks may be size, hash, and whether or not content can be decrypted.

Missing files and files with the wrong size are detected by the file listing done both before and after backup.
This can be turned off. That’s dangerous, but some people may choose to turn the warnings off.

Verifying backend files is also done, but it’s a small sample. This can be optioned higher or off.

You can’t currently add more redundancy, but you can add more checks of file internal content.
backup-test-samples and backup-test-percentage control that. Downloads take time though…
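A sketch of raising the sampling, with example values (the percentage option is the alternative to a fixed sample count):

  --backup-test-samples=3
  --backup-test-percentage=10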

You can also remove the no-backend-verification option if you’re currently configuring that.
This should quite quickly find severe problems like missing files or wrong-length (empty?) ones.

FileBackend file size verification #4691 proposes earlier detection of files uploading wrong size.
This would allow a retry right at upload time, when the data is still available to send again.

It’s somewhat inspired by suspect error reporting/handling with network drives like you’re using.
Even local SMB appears unreliable. It’s not clear why. Duplicati treats shares like ordinary files,
allowing Windows to run its magic underneath. Sometimes, apparently, the magic doesn’t work.

Used where? I think this is also a Repair option (not for other operations). Is that where you did it?
From About → Changelog, there is this 2018 comment:

Removed automatic attempts to rebuild dblock files as it is slow and rarely finds all the missing pieces (can be enabled with --rebuild-missing-dblock-files).

The key word might be “all”. Pieces disappear from the source over time as the source changes.

Works fine here in GUI Commandline. It doesn’t say a lot, but you can open another tab to watch.
About → Show log → Live → Verbose might be good. Use Information for less, Profiling for more.
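If you want the true command line to say more, a sketch with the console log level raised (same placeholders as before):

  Duplicati.CommandLine.exe purge-broken-files <storage-url> --passphrase=<passphrase> --dbpath=<path-to-local-db.sqlite> --console-log-level=Verbose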

Here’s a list-broken-files quickly telling me about a file that I intentionally emptied to see behavior:

Listing remote folder …
remote file duplicati-b1132cd156c1f4fac9478334f587f427a.dblock.zip.aes is listed as Verified with size 0 but should be 989, please verify the sha256 hash “IFZrMO8aVxL8i0aNVq14T8mEok2OrOg+PwTOZ3QWui4=”

Delete that file, and behavior changes to tell me the implications for source files from the broken file:

Listing remote folder ...
2	: 6/2/2022 9:11:42 AM	(1 match(es))
	C:\backup source\B.txt (1 bytes)
1	: 6/2/2022 9:19:35 AM	(1 match(es))
	C:\backup source\B.txt (1 bytes)
0	: 6/2/2022 9:24:11 AM	(1 match(es))
	C:\backup source\B.txt (1 bytes)

I had emptied the dblock from 9:11, and that affects the later backups. Version 0 is always the latest.

purge-broken-files purges the broken B.txt file from the backup, so has to update three dlist files too:

Listing remote folder ...
  Uploading file (957 bytes) ...
  Deleting file duplicati-20220602T131142Z.dlist.zip.aes ...
  Uploading file (1.03 KB) ...
  Deleting file duplicati-20220602T131935Z.dlist.zip.aes ...
  Uploading file (1.11 KB) ...
  Deleting file duplicati-20220602T132411Z.dlist.zip.aes ...

Controlling write-through behaviors in SMB is Microsoft explaining how caching complicates things.
Although we get problems even on a LAN, going over the Internet is probably making things worse.

@frank might want to see whether forcing write-through helps. “Continuous availability” might also help, however I think that’s something not all servers implement. Also discussed in a Veeam blog and site.
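My reading of that Microsoft article is that, on recent Windows (1809 / Server 2019 or later), the client can request write-through when mapping the share; a sketch to double-check against the article itself, using either the classic NET USE form or the PowerShell equivalent:

  NET USE Z: \\server\share /WRITETHROUGH
  New-SmbMapping -LocalPath Z: -RemotePath \\server\share -UseWriteThrough $true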

SMB/CIFS comes in a large number of flavors, and has lots of options. I’m not expert in all of this.
A few times, the person with issue tried debug with Sysinternals Process Monitor. No conclusions.
You’d like the underlying transport and storage to just work, but sometimes it doesn’t cooperate…

Oh yes, and for Internet backups I prefer something with fewer knobs on it :-). SMB is designed by default to run on a LAN, and adding extra delays seems to create headaches.
Edit: also, not using CIFS gives an extra bit of security against basic cryptolockers.