[Error-Duplicati.Library.Main.Operation.TestHandler-FailedToProcessFile]: Failed to process file duplicati-20200105T000000Z.dlist.zip

Hi All,

I installed Duplicati on my UNRAID machine and configured a backup to my local SATA drive and to my Dropbox account.

They both work but after running they show the following message 3 times: “[Error-Duplicati.Library.Main.Operation.TestHandler-FailedToProcessFile]: Failed to process file duplicati-20200105T000000Z.dlist.zip”

Does anyone know what it means and how I can fix it?

Sounds like the testing portion (which is done at the end of a backup job) is failing. Can you try to do a test restore? You should enable the --no-local-blocks option to make sure the restored data comes only from the remote storage location.

Thx for the response,

Can you tell me where I can find the option to select --no-local-blocks?

Kind regards,

Tom van Dijken

In the Duplicati UI, click the main Settings menu. Scroll to the bottom where you see the “Default options” section. In the “Add advanced option” dropdown, pick “no-local-blocks”. Then scroll up a bit to see that the no-local-blocks option was added. Check the box, then scroll to the bottom and click OK.
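If you ever use the command line instead of the web UI, the same option can be passed directly on a test restore. This is only a rough sketch, not your exact command: the storage URL, restore path, and passphrase are placeholders, the CLI is usually duplicati-cli on Linux installs (Duplicati.CommandLine.exe on Windows), and you can drop --passphrase if the backup is unencrypted.

```
# Test restore to a scratch folder, forcing all data to come from the backend
# rather than from matching local blocks. All paths/values are placeholders.
duplicati-cli restore "file:///mnt/user/backups/duplicati" "*" \
  --restore-path=/tmp/duplicati-restore-test \
  --no-local-blocks=true \
  --passphrase=<your-passphrase>
```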

Hello, I am having a similar error, but only on some days, and the file is always different. I will try the test restore.
How can I find which file is referenced?
2020-02-22 04:44:51 -05 - [Error-Duplicati.Library.Main.Operation.TestHandler-FailedToProcessFile]: Failed to process file duplicati-b192a11fc1abf420d88f0a88dc16ececc.dblock.zip.aes

Version 2.0.4.37_Canary_2019-12-12 on Server 2008R2

Sorry for the delay, but I want to let you know that the errors are still occurring. The --no-local-blocks option did not solve this problem.

I also tried to use http-operation-timeout and changed it to 5 minutes.

Every day I receive the same 2 errors:

[Error-Duplicati.Library.Main.Operation.TestHandler-FailedToProcessFile]: Failed to process file duplicati-20200105T000000Z.dlist.zip

and
[Error-Duplicati.Library.Main.Operation.TestHandler-FailedToProcessFile]: Failed to process file duplicati-20200123T000000Z.dlist.zip

I hope you have another option for me to try.

Thx

Just to be clear, that was a suggested setting to put in place before you do a test restore. It wouldn’t help with your “FailedToProcessFile” error.

Have you tried doing a test restore? Any errors?

Try watching the action with the live log at About --> Show log --> Live --> Retry; otherwise the one-line summary is pretty useless, because the error details are on the lines below it that you didn’t see…

I just tried to restore some pictures and Duplicati did the restore job just fine. No problems at all.
So I’m wondering if I can ignore the 2 created errors, because the backup looks fine. I’m just curious what the errors mean.

The dlist files are not ordinarily used for Restore, because their information (and dindex info) is in a database. If that database is damaged or its drive breaks, the dlist file is needed to get that version.

Figuring out what this is is probably worthwhile, but you have to get more details than the summary.

If you see an interesting looking error message, try clicking on it. Sometimes it will expand with detail.

Verifying backend files is the testing that was referred to.

At the end of each backup job, Duplicati checks the integrity by downloading a few files from the backend. The contents of these files are checked against what Duplicati expects them to be.

--backup-test-samples

--backup-test-samples = 1
After a backup is completed, some files are selected for verification on the remote backend. Use this option to change how many.

The TEST command gives a more technical description of it:

Verifies integrity of a backup. A random sample of dlist, dindex, dblock files is downloaded, decrypted and the content is checked against recorded size values and data hashes. <samples> specifies the number of samples to be tested. If “all” is specified, all files in the backup will be tested. This is a rolling check, i.e. when executed another time different samples are verified than in the first run. A sample consists of 1 dlist, 1 dindex, 1 dblock.

So by default there will often be three files tested, but they wouldn’t all be dlist (and not the same dlist).
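If you want that end-of-backup verification to sample more sets, --backup-test-samples can be raised like any other advanced option, in the job’s Options screen or on a command-line backup. A hedged sketch only (storage URL, source path, and passphrase are placeholders; drop --passphrase if the backup is unencrypted):

```
# Back up, then verify up to 3 sample sets (dlist + dindex + dblock each) afterwards.
duplicati-cli backup "file:///mnt/user/backups/duplicati" /mnt/user/data \
  --backup-test-samples=3 \
  --passphrase=<your-passphrase>
```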

Are these completely independent backups, meaning two Duplicati jobs, one to SATA, one to Dropbox? Generally I’d expect SATA to be quite reliable. Network destinations are more likely to have file damage.

Another odd thing is that seeing a duplicati-20200105T000000Z.dlist.zip error on both would say both backups ran at the same time, and that doesn’t happen. Or is this some sort of a sync to Dropbox?

Hi Guys,

I checked the log, but there is no info under Retry in the About log.

If I go to the Stored tab and filter on Error, I see:

Is this something that could help in fixing my problem?

You started a backup at 8:53 PM, watched the live log, and got nothing more? Here’s a normal log:

The three get requests are the test of 1 set of dlist, dindex, and dblock. They’re also in the job log:

image

Above is the per-job log under your job’s Reporting --> Show log --> General then pick relevant time.
The server log you showed is a different log. See Viewing the log files of a backup job. I thought that original post was getting error messages there. Maybe not. Where were those seen, and did you get messages again from the latest run which seemed to not even log in live log? This all seems strange.

The goal here is to get something like the original problem while watching live and job logs for details.
From what there is in that live log, were you running a Repair before backup? One completed at 8:49.
From my example, the TestHandler probably ran after backup started (8:53) and finished (not shown).
Were error messages even seen on this test? If they didn’t happen, then log problems didn’t matter…

I just ran the cloud backup to my Dropbox account while watching with the Live button, and got the following errors.

And I have no idea what it means :frowning: :sweat_smile: :innocent:

Is this now a completely different error situation than the original post got? I see no TestHandler activities, however they might be in live log output that is not in the current view. You can either scroll or look at the job log as described earlier. At the moment it looks like it’s in Compacting files at the backend, as inferred from the messy message talking about DoCompact. The problem is that it’s getting some oddly tiny files which are only 301 bytes long. The files are by default up to 50 MB large. Maybe these are encrypted empty files? AES Crypt could be used on a local copy to change .zip.aes into a .zip if you want to see if it’s empty.
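If you want to peek inside one of those tiny files, decrypt a local copy first. Assuming the AES Crypt command-line tool is available (Duplicati’s bundled SharpAESCrypt.exe can do the same job), something like the below would show whether the zip inside is empty; the file name is just a placeholder for whichever file you copied.

```
# Decrypt a local copy of a suspect file using the backup passphrase,
# then list the zip contents (if any). "example.dblock.zip.aes" is a placeholder.
aescrypt -d -p <your-passphrase> example.dblock.zip.aes
unzip -l example.dblock.zip
```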

On the job’s Destination (screen 2), is the Storage type dropdown set to Dropbox, and the SATA one set to Local folder or drive, with no relationship (e.g. sync) from the SATA folder to the Dropbox account?

Do you want to take a look using TeamViewer or another remote system? It’s too difficult for me. Or should I completely remove the whole backup and program, and then only perform a Dropbox backup?

I don’t have Unraid or Dropbox, and I don’t want to learn them on yours. Let’s step back to the start:

Is this still the type of problem we’re fighting? Are both naming exactly the same file for their errors?
Is duplicati-20200105T000000Z.dlist.zip still the file? Is either backup valuable? Did any ever work?

Starting over is certainly possible, but I want to make sure it’s OK with you. If you want to start over, Database management has a Delete button that will delete the local database. Before doing it, see whether you can find the remote Dropbox files, as that might have to be a manual deletion. If you’re unsure where they are in Dropbox, a Duplicati backup Delete can ordinarily also delete remote files, however I’d feel better in this unclear situation if it was manual. Records may be confused currently.

Although just deleting job database and job remote files should leave your job configurations intact, below gives examples of how I would expect them to look for backups direct to Dropbox and SATA.

Your Dropbox backup should have a screen like the below to fill out.

image

Your SATA backup (which I guess we’ll not use for now while trying to get Dropbox up) would be like:

image

and should be completely independent of the Dropbox path, including any Dropbox sync that it does.

Thanks for your patience… Let’s forget Dropbox for now and focus on fixing the local backup first.

If I open the log of my LOCAL SATA HDD backup, I constantly see 2 warnings and 2 errors:

  1. [Error-Duplicati.Library.Main.Operation.TestHandler-FailedToProcessFile]: Failed to process file duplicati-20200105T000000Z.dlist.zip
  2. [Error-Duplicati.Library.Main.Operation.TestHandler-FailedToProcessFile]: Failed to process file duplicati-20200123T000000Z.dlist.zip

See also the screenshot

The local backup is successful every day, as far as I can see in the image below.
localbackup overview

I also tried to restore files and I can confirm that it’s working fine. I just receive the two errors and warnings every time after creating the backup.

There seems to be some sort of file corruption problem, but the one-line summaries in the log don’t give details.
About → Show log → Live → Warning should pick up those lines, then I think click will expand them.

For the Errors, it at least names the file, so you can test, maybe with unzip -t for an integrity opinion.
I’m pretty sure FailedToProcessFile has some details in it, unlike the below which may be nearby…

For Warnings, Cant complete backup or database compress is an example of what you might observe, where it looks like a file got through Get fine (and its name is visible), but turned out to be a broken zip.

Is this even with --no-local-blocks added and checkmarked in the Advanced options of the Options screen? Remember, you’re not doing a restore from the backup itself (which I assume you want) without that… Explained further below, you also might not be using the files with the problem (some names still TBD).

TestHandler samples are chosen somewhat randomly (but balanced), so might affect another Restore.

See above for more information on Duplicati testing. Good news here is that zip files are very easy to integrity test (doesn’t mean they’re just right, but Central Record Header errors will probably be seen).

Using the unzip command with either shell wildcard (watch out for command length limits) or a loop or xargs could be a do-it-yourself integrity test on all the SATA files to see if a pattern of problems exists.
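A do-it-yourself sketch of that loop, assuming the destination path shown is replaced with your real SATA backup folder:

```
# Run unzip's integrity test on every Duplicati zip and report the failures.
cd /mnt/user/backups/duplicati   # placeholder path - use your real destination
for f in duplicati-*.zip; do
  unzip -tqq "$f" >/dev/null 2>&1 || echo "FAILED: $f"
done
```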

More thorough Duplicati-style testing would use its own test command with the all option for all files.
Currently, the extent of the file issue is not known, but you’ve got at least a couple of troublesome files.
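A rough command-line sketch of that full test (storage URL and database path are placeholders, the CLI name differs by platform, and you’d add --passphrase or --no-encryption to match how the backup was made):

```
# Download and verify every dlist, dindex and dblock file in the backup.
duplicati-cli test "file:///mnt/user/backups/duplicati" all \
  --dbpath=/path/to/job-database.sqlite
```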

Bad dlist files don’t matter as long as the Database is intact. If you ever Recreate, they will be needed. What generated two Warnings isn’t seen, but I see the time is the same as the dlists, so maybe those.

Given an intact DB, you can rebuild bad dlist files by deleting them, then running a Database Repair. Your DB is probably intact enough to recreate those two files if Restore dropdown for the dates is OK. Note that the time on the dlist filename is UTC, but the time in your Restore dropdown would be local.

Instead of actually deleting files, it would be good to rename them with a prefix to hide them, or move them into a different directory. This will allow examination if needed, or a put-back if that seems better.

If I go to About → Show log → Live → Warning it shows me the following:

Mar 30, 2020 6:30 AM: Failed to process file duplicati-20200123T000000Z.dlist.zip
{"ClassName":"System.NullReferenceException","Message":"Object reference not set to an instance of an object","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":" at SharpCompress.Readers.AbstractReader`2[TEntry,TVolume].get_Entry () [0x00000] in <5717dfb1db2745ffb30a27e1fee78b19>:0 \n at SharpCompress.Readers.AbstractReader`2[TEntry,TVolume].LoadStreamForReading (System.IO.Stream stream) [0x0001c] in <5717dfb1db2745ffb30a27e1fee78b19>:0 \n at SharpCompress.Readers.AbstractReader`2[TEntry,TVolume].MoveToNextEntry () [0x0002c] in <5717dfb1db2745ffb30a27e1fee78b19>:0 \n at Duplicati.Library.Compression.FileArchiveZip.LoadEntryTable () [0x00105] in :0 \n at Duplicati.Library.Compression.FileArchiveZip.GetEntry (System.String file) [0x00014] in :0 \n at Duplicati.Library.Compression.FileArchiveZip.OpenRead (System.String file) [0x00014] in :0 \n at Duplicati.Library.Main.Volumes.VolumeReaderBase.ReadFileset () [0x00000] in <8f1de655bd1240739a78684d845cecc8>:0 \n at Duplicati.Library.Main.Volumes.VolumeReaderBase..ctor (System.String compressor, System.String file, Duplicati.Library.Main.Options options) [0x0001b] in <8f1de655bd1240739a78684d845cecc8>:0 \n at Duplicati.Library.Main.Volumes.FilesetVolumeReader..ctor (System.String compressor, System.String file, Duplicati.Library.Main.Options options) [0x00000] in <8f1de655bd1240739a78684d845cecc8>:0 \n at Duplicati.Library.Main.Operation.TestHandler.TestVolumeInternals (Duplicati.Library.Main.Database.LocalTestDatabase db, Duplicati.Library.Main.Database.IRemoteVolume vol, System.String tf, Duplicati.Library.Main.Options options, System.Double sample_percent) [0x000ac] in <8f1de655bd1240739a78684d845cecc8>:0 \n at Duplicati.Library.Main.Operation.TestHandler.DoRun (System.Int64 samples, Duplicati.Library.Main.Database.LocalTestDatabase db, Duplicati.Library.Main.BackendManager backend) [0x00340] in <8f1de655bd1240739a78684d845cecc8>:0 ","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":null,"HResult":-2147467261,"Source":"SharpCompress"}

If I have set up --no-local-blocks enabled under the main Settings tab, will that give the same option as what you describe?

So do I have to put a .old file extension behind this file “duplicati-20200123T000000Z.dlist.zip”?

Thx for your time. Great to have people that want to help noobs :slight_smile:

You can test the suspect damaged file with zip --test duplicati-20200123T000000Z.dlist.zip
What’s file length via ls -l duplicati-20200123T000000Z.dlist.zip or some other size method?

Does that date (after converting UTC to local time) exist and show files on Restore version dropdown? https://www.timeanddate.com/ can help with time information and conversions, if that would be helpful.
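If a Linux shell is handy, GNU date can also do the conversion for the UTC timestamp in the dlist name, for example:

```
# Show what 2020-01-23 00:00 UTC is in the local time zone (GNU date syntax).
date -d "2020-01-23 00:00 UTC"
```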

Is this backup large enough that it would be slow or costly to look for other problems before fixing this? Checking would involve downloading everything. Alternatively, you could fix this file and wait for issues.

There’s apparently also something wrong with duplicati-20200105T000000Z.dlist.zip, but I don’t know which file is with which backup. At one time it sounded like you had two separate backups set up.

That should put --no-local-blocks=true in all jobs, which should be fine for ensuring a valid Restore test.

Rename duplicati-20200123T000000Z.dlist.zip to hidden-duplicati-20200123T000000Z.dlist.zip.
A prefix keeps it from being found as a duplicati file relevant to this job. A suffix wouldn’t do that.
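For example, from a shell in the backup destination folder:

```
# Hide the damaged dlist from Duplicati without destroying it.
mv duplicati-20200123T000000Z.dlist.zip hidden-duplicati-20200123T000000Z.dlist.zip
```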

If the view of Restore above looked reasonable and you don’t want to check other files, you can run a Database Repair, and duplicati-20200123T000000Z.dlist.zip should be rebuilt from database data.
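Repair can be started from the job’s Database screen in the web UI, or from the command line. A rough sketch only, with a placeholder URL and database path (CLI name differs by platform; add --passphrase or --no-encryption to match how the backup was made):

```
# Rebuild the missing dlist file from the local job database.
duplicati-cli repair "file:///mnt/user/backups/duplicati" \
  --dbpath=/path/to/job-database.sqlite
```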