Client Verification File Contains Expired Files

Duplicati version: 2.0.6.3_beta_2021-06-17
OS: Windows 10 21H1, Ubuntu 20.04, Zorin 15.3

I’ve been looking to add some additional verification steps to my regular Duplicati backup schedule to warn of impending problems and allow time to correct them before a restore is needed. I don’t have enough space to do a full restore of all clients, even if they were done one at a time, so I’ve been searching for something that will do some more involved integrity checks using backend data a few times a year.

I recently learned about DuplicatiVerify.py and the --upload-verification-file option from this post, which seem to accomplish what I’m looking for. If I’m understanding things correctly, the verification file, duplicati-verification.json, is a list of all Duplicati files that the client is expecting to be present on the backend. Running DuplicatiVerify.py against a storage location on the backend verifies that these files exist and checks if they are intact based on their hashes.
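
If I’ve got that right, the check itself boils down to something like the sketch below. This is not the actual script, just my mental model of it: the Base64-encoded SHA-256 hash format and the top-level JSON layout are my assumptions.

    # Simplified sketch of my understanding of the check, not the real script.
    # Assumptions: entries carry Name and Hash fields, Hash is a Base64-encoded
    # SHA-256 of the file contents, and the JSON is (roughly) a list of entries.
    import base64, hashlib, json, os

    def verify(folder):
        with open(os.path.join(folder, "duplicati-verification.json")) as f:
            data = json.load(f)
        entries = data if isinstance(data, list) else data.get("Files", [])
        errors = 0
        for entry in entries:
            path = os.path.join(folder, entry["Name"])
            if not os.path.exists(path):
                print("File missing:", entry["Name"])
                errors += 1
                continue
            with open(path, "rb") as fh:
                digest = hashlib.sha256(fh.read()).digest()
            if base64.b64encode(digest).decode() != entry["Hash"]:
                print("Hash mismatch:", entry["Name"])
                errors += 1
        return errors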

I added the --upload-verification-file option to all of my existing client backup configurations, ran a full backup on each client, then ran DuplicatiVerify.py on each client’s storage location on the backend server. The results identified several missing files, including many dblock files, for all clients but one. None of the clients have been reporting errors through their Duplicati email reports, so I was intrigued but not yet alarmed. Investigating further, I saw that some of the reported missing files were dlist files outside of my usual one-month retention time, which, by my understanding, should be missing since they’ve expired. This made me suspect that most of these missing files were deleted intentionally based on each client’s retention policy.

I remembered that the one client with no missing files (Windows 10) recently had its storage location moved, and it should not have had any backups expire yet. I then performed the following test using this client to get some additional information.

  1. Ran a full backup on the client.
  2. Confirmed no errors were reported by DuplicatiVerify.py on the client’s storage location on the backend server.
  3. As five backups for the client were present on the backend, changed --keep-time=1M to --keep-versions=5 in the client’s backup config to force the oldest backup to expire.
  4. Ran another full backup and saw that the oldest dlist file was deleted as expected.
  5. Ran DuplicatiVerify.py on the backend, which reported that the expired dlist file was missing.
  6. Confirmed that the expired dlist file was still present in duplicati-verification.json (a quick way to check this is sketched just below the list).
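
For step 6, a check along these lines is enough to see the entry (same layout and field-name assumptions as in the sketch above; the script name is just for illustration):

    # check_entry.py - print the State of a named file in the verification JSON.
    # Layout and field-name assumptions as in the earlier sketch.
    import json, sys

    name, jsonpath = sys.argv[1], sys.argv[2]
    with open(jsonpath) as f:
        data = json.load(f)
    entries = data if isinstance(data, list) else data.get("Files", [])
    for entry in entries:
        if entry["Name"] == name:
            print(entry["Name"], "State =", entry.get("State"))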

Based on my understanding, these files should not be reported as missing, since they were deleted intentionally due to the retention policy. The duplicati-verification.json file uploaded by clients seems to include files that have already expired.

Please let me know if I’ve misunderstood something or can provide additional information.

Is this expected behavior, and if not, can it be corrected?

Welcome to the forum @landgarden

Nice writeup. One thing I’d like to know is what Storage Type is used for the Destination, and whether the Destination has enough transfer capability to consider running verification from Duplicati itself with The TEST command.

Looks to me like a bug introduced in 2015, when the record of deleted files started being kept for 2 additional hours:
Added grace-period for incomplete uploads to avoid backups stopping due to the Apache WebDAV issues.

That’s probably (not tested) how deleted files ended up in duplicati-verification.json as State 5 and caused these false reports.
DuplicatiVerify.ps1 and false warnings #2989 was the PowerShell report of this, but the fix was made only in the PowerShell script.

is possibly another place it could have been fixed, but maybe somebody thought that was a worse idea.
Regardless, the only place you can get a fast correction is in the Python script. Do you code in Python?

         filename = file["Name"]
         hash = file["Hash"]
         size = file["Size"]
+        state = file["State"]

         fullpath = os.path.join(folder, filename)
         if not os.path.exists(fullpath):
-            print "File missing: ", fullpath
-            errorCount += 1
+            # State 5 appears to mark entries Duplicati has already deleted,
+            # so skip the missing-file warning for those.
+            if state != 5:
+                print "File missing: ", fullpath
+                errorCount += 1
         else:
             checked += 1
             print "Verifying file ", filename

experimentally seems to fix the issue, but I’m not a Python programmer, so if anyone here is, please correct it as needed.
Actually, if any Python programmer would like to convert the script for Python 3, that would also be nice.
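
For whoever picks that up, the same snippet would presumably look roughly like this in Python 3 (untested sketch; the rest of the script would of course also need converting):

    # Rough Python 3 equivalent of the patched section above (untested sketch).
    filename = file["Name"]
    hash = file["Hash"]
    size = file["Size"]
    state = file["State"]

    fullpath = os.path.join(folder, filename)
    if not os.path.exists(fullpath):
        if state != 5:  # State 5 seems to mark entries already deleted
            print("File missing: ", fullpath)
            errorCount += 1
    else:
        checked += 1
        print("Verifying file ", filename)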

There is a fixed-in-Canary-but-not-Beta bug that leaves old data at the end of the verification file if the new data is shorter; however, I think this can only occur when Duplicati updates the file directly (Type is Local folder or drive).
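
For illustration only (this is not Duplicati’s code, just the general failure mode): rewriting a file in place without truncating it leaves the old tail behind whenever the new content is shorter.

    # Illustration of the stale-tail failure mode, not Duplicati's actual code.
    with open("demo.json", "w") as f:
        f.write('{"Files": [1, 2, 3, 4, 5]}')   # older, longer content

    with open("demo.json", "r+") as f:
        f.write('{"Files": [1, 2]}')             # newer, shorter content, no truncate()

    with open("demo.json") as f:
        print(f.read())   # {"Files": [1, 2]}3, 4, 5]}  <- stale tail from the old version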

If you have a fast and free connection to the destination, you might alternatively test all the file content with The TEST command, specifying all for the sample count. That will catch any files that got corrupted…

Regardless, could you file an Issue on this (citing this topic), in the hope that a developer will work on it?

Thanks for your response.

All clients use the SFTP/SSH storage provider except for one, the destination server (Ubuntu 20.04) itself, which uses the local folder or drive provider to back up its own data since it also acts as a NAS. All clients, including the destination server, send their backups to the same place: a dedicated RAID array on the Ubuntu server.

The destination should have more than enough transfer capability to run a full test on each of the clients, but I’d rather have verification capability on the backend itself. That way I can have one bulk test of all backups that runs a few times a year instead of needing to create and check a test on each client and figure out how to schedule them independently of their regular backups.

If this is the path of least resistance, though, and it doesn’t add a significant amount of time to each client’s backup, I may just do that.

Thanks; I figured someone more familiar with Duplicati development could find this faster than I could. I somehow missed that issue for the PowerShell version of the script, so apologies for that.

I do, so I can add your proposed fix to my local copy of the script and test from there.

I actually planned on doing just that originally: I was writing a BASH script, to be scheduled on the destination server, that looped through each client’s backup location, temporarily recreated the client’s database using duplicati-cli repair, then tested everything in it using duplicati-cli test. While debugging it I learned about DuplicatiVerify.py, which seemed easier and close enough for me.
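
For anyone curious, that idea amounts to something like the sketch below (Python rather than the BASH I was writing, and the storage URLs, passphrase handling, and exact duplicati-cli options are placeholders that would need adjusting for a real setup):

    # Hypothetical sketch of the recreate-then-test loop described above.
    # Paths and CLI arguments are placeholders, not a tested configuration.
    import os
    import subprocess
    import tempfile

    BACKUP_ROOT = "/srv/backups"   # assumed layout: one subfolder per client

    for client in sorted(os.listdir(BACKUP_ROOT)):
        storage_url = "file://" + os.path.join(BACKUP_ROOT, client)
        with tempfile.TemporaryDirectory() as tmp:
            dbpath = os.path.join(tmp, "temp.sqlite")
            # Rebuild a temporary local database from the backend files...
            subprocess.run(["duplicati-cli", "repair", storage_url,
                            "--dbpath=" + dbpath], check=True)
            # ...then verify every remote volume against it.
            subprocess.run(["duplicati-cli", "test", storage_url, "all",
                            "--dbpath=" + dbpath], check=True)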

I will file an issue.

Thanks again for your help.

Downloading the whole backup could take a while, which is an issue however you do it, because having files change underneath the verification is a good way to get some complaints about the files being verified…

backup-test-samples and the option below can boost the testing done after every backup (as an alternative).
Backup Test block selection logic describes the method, which tries to avoid an all-at-once test.

  --backup-test-percentage (Integer): The percentage of samples to test after
    a backup
    After a backup is completed, some (dblock, dindex, dlist) files from the
    remote backend are selected for verification. Use this option to specify
    the percentage (between 0 and 100) of files to test. If the
    backup-test-samples option is also provided, the number of samples tested
    is the maximum implied by the two options. If the no-backend-verification
    option is provided, no remote files are verified.
    * default value: 0
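
As a made-up illustration of how the two options combine per that help text (the exact rounding Duplicati uses may differ):

    # Illustrative numbers only; shows the "maximum implied by the two options" rule.
    remote_samples = 1000          # hypothetical number of sample sets on the backend
    backup_test_samples = 5
    backup_test_percentage = 2     # percent

    tested = max(backup_test_samples, remote_samples * backup_test_percentage // 100)
    print(tested)                  # 20, so 20 samples would be tested after each backup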

Great. BTW I was having second thoughts on Python 3 before checking availability on old LTS releases.
I know I had to specifically install the end-of-life Python 2 on my rather recent test system, but it existed.

This is actually a great practice, as it proves that it can be done in a timely manner. One can seemingly have a perfectly “intact” (meaning good hashes) set of destination files that still can’t actually achieve that…

Large backups with the default 100 KB blocksize can surprise people, but at least that slowness is predictable (though sometimes noticed too late, because blocksize is a one-time setting fixed at the first backup). Other inconsistencies can cause the entire backup to be read, instead of just the dlist and dindex files, and that is also often a futile search for data…

I just ran into this problem as well. I’m glad there was a previous report of it.

I couldn’t find an issue on GitHub for this – did one ever get created? If not, I could create one.

I can’t either, so please go ahead and file it.

I apologize for not filing the issue. Some life stuff happened that caused this to be substantially lowered in priority. If someone else could submit the issue I’d appreciate it.

I can at least verify that the Python code changes posted above appear to have resolved the issue for me.


Created issue 4666.

The code changes above resolved the issue for me as well. I understand about life stuff happening. Thanks for posting the issue to begin with; it saved me a lot of thrashing around.