[Error-Duplicati.Library.Main.Operation.TestHandler-FailedToProcessFile]: Failed to process file duplicati-20200105T000000Z.dlist.zip

I just tried to restore some pictures and Duplicati did the restore job just fine. No problems at all.
So I'm wondering if I can ignore the 2 reported errors, because the backup looks fine. I'm just curious what the errors mean.

The dlist files are not ordinarily used for Restore, because their information (and dindex info) is in a database. If that database is damaged or its drive breaks, the dlist file is needed to get that version.

Figuring out what this is is probably worthwhile, but you'll need more detail than the summary provides.

If you see an interesting looking error message, try clicking on it. Sometimes it will expand with detail.

Verifying backend files is the testing that was referred to.

At the end of each backup job, Duplicati checks the integrity by downloading a few files from the backend. The contents of these files are checked against what Duplicati expects them to be.


--backup-test-samples = 1
After a backup is completed, some files are selected for verification on the remote backend. Use this option to change how many.

The TEST command is a more technical description of it:

Verifies integrity of a backup. A random sample of dlist, dindex, dblock files is downloaded, decrypted and the content is checked against recorded size values and data hashes. <samples> specifies the number of samples to be tested. If “all” is specified, all files in the backup will be tested. This is a rolling check, i.e. when executed another time different samples are verified than in the first run. A sample consists of 1 dlist, 1 dindex, 1 dblock.

So by default there will often be three files tested, but they wouldn’t all be dlist (and not the same dlist).
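If you want to run that same check by hand, the test command can be invoked from a terminal. This is only a sketch: the mono prefix, install path, and file:// destination URL are assumptions for a Linux install and must be adjusted to your setup.

```
# Manually run the same sampling test as after a backup, but on all files.
# Install path and destination URL are assumptions -- adjust both.
mono /usr/lib/duplicati/Duplicati.CommandLine.exe test \
  "file:///mnt/sata/duplicati-backup" all
```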

Are these completely independent backups, meaning two Duplicati jobs, one to SATA, one to Dropbox? Generally I’d expect SATA to be quite reliable. Network destinations are more likely to have file damage.

Another odd thing is that seeing a duplicati-20200105T000000Z.dlist.zip error on both would say both backups ran at the same time, and that doesn’t happen. Or is this some sort of a sync to Dropbox?

Hi Guys,

I checked the log, but there is nothing at the Retry level of the About info

If I go to the stored tab and filter on Error I see

Is this something that could help in fixing my problem?

You started a backup at 8:53 PM, watched the live log, and got nothing more? Here’s a normal log:

The three get requests are the test of 1 set of dlist, dindex, and dblock. They’re also in the job log:


Above is the per-job log under your job's Reporting --> Show log --> General, then pick the relevant time.
The server log you showed is a different log. See Viewing the log files of a backup job. I thought that the original post was getting error messages there. Maybe not. Where were those seen, and did you get messages again from the latest run, which seemed to not even log in the live log? This all seems strange.

The goal here is to get something like the original problem while watching live and job logs for details.
From what there is in that live log, were you running a Repair before backup? One completed at 8:49.
From my example, the TestHandler probably ran after backup started (8:53) and finished (not shown).
Were error messages even seen on this test? If they didn’t happen, then log problems didn’t matter…

I just run the cloud backup to my dropbox account using the Live button with the following errors.

And I have no idea what it means :frowning: :sweat_smile: :innocent:

Is this now a completely different error situation than the original post got? I see no TestHandler activities, however they might be in live log output that is not in the current view. You can either scroll or look at the job log as described earlier. At the moment it looks like it's compacting files at the backend, as inferred by the messy message talking about DoCompact. The problem is that it's getting some oddly tiny files which are only 301 bytes long. The files are by default up to 50 MB large. Maybe these are encrypted empty files? AES Crypt could be used on a local copy to change the .zip.aes into a .zip if you want to see if it's empty.

On job Destination (screen 2) is the Storage type dropdown set to Dropbox, and the SATA one set to Local folder or drive, and no relationship (e.g. sync) from the SATA folder to Dropbox account?

Do you want to take a look using TeamViewer or another remote system? It's too difficult for me. Or should I completely remove the whole backup and program, and then only perform a Dropbox backup?

I don’t have Unraid or Dropbox, and I don’t want to learn them on yours. Let’s step back to the start:

Is this still the type of problem we’re fighting? Are both naming exactly the same file for their errors?
Is duplicati-20200105T000000Z.dlist.zip still the file? Is either backup valuable? Did any ever work?

Starting over is certainly possible, but I want to make sure it’s OK with you. If you want to start over, Database management has a Delete button that will delete the local database. Before doing it, see whether you can find the remote Dropbox files, as that might have to be a manual deletion. If you’re unsure where they are in Dropbox, a Duplicati backup Delete can ordinarily also delete remote files, however I’d feel better in this unclear situation if it was manual. Records may be confused currently.

Although just deleting job database and job remote files should leave your job configurations intact, below gives examples of how I would expect them to look for backups direct to Dropbox and SATA.

Your Dropbox backup should have a screen like the below to fill out.


Your SATA backup (which I guess we’ll not use for now while trying to get Dropbox up) would be like:


and should be completely independent of the Dropbox path, including any Dropbox sync that it does.

thanks for your patience… Let’s forget Dropbox for now and focus on fixing the Local Backup first.

If I open the log of my LOCAL SATA HDD backup I see constantly 2 warnings and 2 errors

  1. [Error-Duplicati.Library.Main.Operation.TestHandler-FailedToProcessFile]: Failed to process file duplicati-20200105T000000Z.dlist.zip
  2. [Error-Duplicati.Library.Main.Operation.TestHandler-FailedToProcessFile]: Failed to process file duplicati-20200123T000000Z.dlist.zip

See also the screenshot

The local backup is successful every day, as far as I can see in the image below
localbackup overview

I also tried to restore files and I can confirm that it's working fine. I only receive the two errors and warnings every time after creating the backup.

There seems to be some sort of file corruption problem, but the one-line summaries in the log don't give details.
About --> Show log --> Live --> Warning should pick up those lines, then I think click will expand them.

For the Errors, it at least names the file, so you can test, maybe with unzip -t for an integrity opinion.
I’m pretty sure FailedToProcessFile has some details in it, unlike the below which may be nearby…

For Warnings, Cant complete backup or database compress is an example of what you might observe, where it looks like a file got through Get fine (and its name is visible), but turned out to be a broken zip.

Is this even with --no-local-blocks added and checkmarked in Advanced options of Options screen? Remember, you’re not doing a restore from the backup itself (which I assume you want) without that… Explained further below, you also might not be using the files with the problem (some names still TBD).

TestHandler samples are chosen somewhat randomly (but balanced), so might affect another Restore.

See above for more information on Duplicati testing. Good news here is that zip files are very easy to integrity test (passing doesn’t mean the contents are fully correct, but Central Record Header errors will probably be seen).

Using the unzip command with either shell wildcard (watch out for command length limits) or a loop or xargs could be a do-it-yourself integrity test on all the SATA files to see if a pattern of problems exists.
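A sketch of such a loop, assuming bash plus Info-ZIP unzip; the destination path in the usage example is made up, so substitute your SATA folder:

```shell
# check_zips DIR: integrity-test every duplicati-*.zip under DIR and
# report the ones that fail unzip's own test.
check_zips() {
  find "$1" -name 'duplicati-*.zip' -print0 |
  while IFS= read -r -d '' f; do
    unzip -tq "$f" >/dev/null 2>&1 || echo "BROKEN: $f"
  done
}

# Example (path is an assumption -- use your SATA destination):
# check_zips /mnt/sata/duplicati-backup
```

Using `find -print0` with a null-delimited read keeps this safe even if a filename somehow contains spaces, and avoids the command-length limits a shell wildcard can hit.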

More thorough Duplicati-style testing would use its own test command with the all option for all files.
Currently, the extent of the file issue is not known, but you’ve got at least a couple of troublesome files.

Bad dlist files don’t matter as long as the Database is intact. If you ever Recreate, they will be needed. What generated two Warnings isn’t seen, but I see the time is the same as the dlists, so maybe those.

Given an intact DB, you can rebuild bad dlist files by deleting them, then running a Database Repair. Your DB is probably intact enough to recreate those two files if Restore dropdown for the dates is OK. Note that the time on the dlist filename is UTC, but the time in your Restore dropdown would be local.

Instead of actually deleting files, it would be good to rename them with a prefix to hide them, or move them into a different directory. This will allow examination if needed, or a put-back if that seems better.

If I go to About --> Show log -->Live --> warning it shows me the following

Mar 30, 2020 6:30 AM: Failed to process file duplicati-20200123T000000Z.dlist.zip
{"ClassName":"System.NullReferenceException","Message":"Object reference not set to an instance of an object","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":" at SharpCompress.Readers.AbstractReader`2[TEntry,TVolume].get_Entry () [0x00000] in <5717dfb1db2745ffb30a27e1fee78b19>:0 \n at SharpCompress.Readers.AbstractReader`2[TEntry,TVolume].LoadStreamForReading (System.IO.Stream stream) [0x0001c] in <5717dfb1db2745ffb30a27e1fee78b19>:0 \n at SharpCompress.Readers.AbstractReader`2[TEntry,TVolume].MoveToNextEntry () [0x0002c] in <5717dfb1db2745ffb30a27e1fee78b19>:0 \n at Duplicati.Library.Compression.FileArchiveZip.LoadEntryTable () [0x00105] in :0 \n at Duplicati.Library.Compression.FileArchiveZip.GetEntry (System.String file) [0x00014] in :0 \n at Duplicati.Library.Compression.FileArchiveZip.OpenRead (System.String file) [0x00014] in :0 \n at Duplicati.Library.Main.Volumes.VolumeReaderBase.ReadFileset () [0x00000] in <8f1de655bd1240739a78684d845cecc8>:0 \n at Duplicati.Library.Main.Volumes.VolumeReaderBase..ctor (System.String compressor, System.String file, Duplicati.Library.Main.Options options) [0x0001b] in <8f1de655bd1240739a78684d845cecc8>:0 \n at Duplicati.Library.Main.Volumes.FilesetVolumeReader..ctor (System.String compressor, System.String file, Duplicati.Library.Main.Options options) [0x00000] in <8f1de655bd1240739a78684d845cecc8>:0 \n at Duplicati.Library.Main.Operation.TestHandler.TestVolumeInternals (Duplicati.Library.Main.Database.LocalTestDatabase db, Duplicati.Library.Main.Database.IRemoteVolume vol, System.String tf, Duplicati.Library.Main.Options options, System.Double sample_percent) [0x000ac] in <8f1de655bd1240739a78684d845cecc8>:0 \n at Duplicati.Library.Main.Operation.TestHandler.DoRun (System.Int64 samples, Duplicati.Library.Main.Database.LocalTestDatabase db, Duplicati.Library.Main.BackendManager backend) [0x00340] in <8f1de655bd1240739a78684d845cecc8>:0 ","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":null,"HResult":-2147467261,"Source":"SharpCompress"}

I have --no-local-blocks enabled under the main Settings tab. Will that give the same result as what you describe?

So do I have to put a .old file extension behind this file “duplicati-20200123T000000Z.dlist.zip”?

Thx for your time. Great to have people that wanna help noobs :slight_smile:

You can test the suspect damaged file with zip --test duplicati-20200123T000000Z.dlist.zip
What’s file length via ls -l duplicati-20200123T000000Z.dlist.zip or some other size method?

Does that date (after converting UTC to local time) exist and show files on Restore version dropdown? https://www.timeanddate.com/ can help with time information and conversions, if that would be helpful.
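If you'd rather do the conversion locally, GNU date can translate the UTC stamp in the filename. This sketch assumes bash (for the substring slicing) and GNU date's -d option; the filename is the one from this thread:

```shell
# Pull the UTC timestamp out of a dlist name and print it in local time.
name='duplicati-20200123T000000Z.dlist.zip'
ts=$(echo "$name" | sed -E 's/^duplicati-([0-9]{8}T[0-9]{6}Z).*/\1/')
# Rewrite 20200123T000000Z into a form GNU date accepts:
utc="${ts:0:4}-${ts:4:2}-${ts:6:2} ${ts:9:2}:${ts:11:2}:${ts:13:2} UTC"
date -d "$utc"   # prints the same instant in your local timezone
```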

Is this backup large enough that it would be slow or costly to look for other problems before fixing this? Checking would involve downloading everything. Alternatively, you could fix this file and wait for issues.

There’s apparently also something wrong with duplicati-20200105T000000Z.dlist.zip, but I don’t know which file is with which backup. At one time it sounded like you had two separate backups set up.

That should put --no-local-blocks=true in all jobs, which should be fine for ensuring a valid Restore test.

Rename duplicati-20200123T000000Z.dlist.zip to hidden-duplicati-20200123T000000Z.dlist.zip.
A prefix keeps it from being found as a duplicati file relevant to this job. A suffix wouldn’t do that.
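As a concrete sketch of the rename (the destination path in the commented example is an assumption, so use your SATA folder):

```shell
# hide_volume DIR FILE: rename FILE inside DIR with a "hidden-" prefix so
# Duplicati no longer recognizes it as one of its own volumes.
hide_volume() {
  mv "$1/$2" "$1/hidden-$2"
}

# Example (path is an assumption -- use your SATA destination):
# hide_volume /mnt/sata/duplicati-backup duplicati-20200123T000000Z.dlist.zip
```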

If the view of Restore above looked reasonable and you don’t want to check other files, you can Database Repair and duplicati-20200123T000000Z.dlist.zip should be built from database data.

If I navigate in my Unraid terminal to the local SATA disk and run zip --test duplicati-20200123T000000Z.dlist.zip

I received the following message

What will be the next step from here?

OK, so this confirms the Duplicati claim that the .zip file is bad. Do you know if you ever used the Advanced option --throttle-upload or the speed control at the top of the screen to the right of the status area?

Corruption of files could happen that would escape the size check. The bug is fixed in a newer release, but possibly you weren’t on that on Jan 23, because it had only been out for a couple of days by then.

to gather its information while it’s still there, then either assume it’s the only bad one, or scan all.
Seeing the screenshot of LOCAL BACKUP at about 3 TB source makes me worry about speed.

For this file-accessible backup, there’s actually another way to test all files, but it still reads them.
Possibly it will be a little faster than test command. Does python --version say 2.something?
Basically, this is --upload-verification-file, then utility-scripts/DuplicatiVerify.py in your installation folder. If you’re running in Docker, then I’m not sure exactly how you get to that script.
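A rough walk-through of that path; the script locations below are assumptions (search your install for DuplicatiVerify.py), and step 1 happens in the job's Advanced options:

```
# 1. Add --upload-verification-file=true to the job's advanced options.
# 2. Run a backup, so duplicati-verification.json lands at the destination.
# 3. Verify every destination file against that manifest
#    (both paths are assumptions -- adjust to your install and destination):
python /usr/lib/duplicati/utility-scripts/DuplicatiVerify.py /mnt/sata/duplicati-backup
```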

Other option is to fix one file, then maybe discover some others via periodic tests as time passes.
Cranking up the test level is possible. This path should be quick, but it’s unknown what it missed.

Do you have preference for how much effort you want to put in now to see what might be wrong?

Regardless, you have at least duplicati-20200123T000000Z.dlist.zip bad, and you haven’t tested duplicati-20200105T000000Z.dlist.zip with zip --test. That would be another useful thing to do.

You can probably quickly test all of your dlist.zip files (which might not be the only bad ones) with:

unzip -t '*dlist.zip' (type in the quotes)

Well, I deleted the file and Duplicati is now making backups without errors. So it looks like the local backup issue has been resolved. Thx a lot!!

For the Cloud Dropbox backup I have the following error.

Object reference not set to an instance of an object
Do you have any ideas what that message would mean?

It’s a generic error in software, with 726,000 hits in Google. For Duplicati, it would be necessary to see what happens before it to have any hope of even recognizing it as a specific issue that may be known.

The live log can get some useful information, but getting the best view might take a few tries. Retry is probably a reasonable starting level for the logging dropdown, but it might wind up needing more later.

I thought let’s start over cloud-based, so I removed the Dropbox backup and created a completely new one. When the backup has finished I will let you know if I still have errors.


Did you finally solve this issue?
I have the same error with dlist, dindex and dblock files.
I want to back up my NAS folders onto an S3-compatible server.
I set the storage class to ‘GLACIER’, so I am thinking maybe Duplicati is not able to check these files at the end of the backup because they are already unavailable. Restoration could take up to 6 hours in the Glacier class.
I can manually get the files back live, but maybe it would be better if Duplicati could apply the GLACIER flag only to backup files and not to dindex, dlist and dblock?

This thread gives more details about that : Duplicati and S3 Glacier - #4 by WhoopsHelp

If you want to go straight to Glacier, you need to disable compaction and backend verification. You probably also need to use unlimited retention.
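As a sketch, the advanced options involved would look something like the below, with retention left at “Keep all backups” so Duplicati never tries to delete and compact old versions. Check the option names against the Duplicati manual for your version:

```
--no-auto-compact=true
--backup-test-samples=0
```

Both settings avoid downloads from the cold store: the first stops compacting (which re-reads dblocks), and the second skips the after-backup sample test that this whole thread has been about.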