Best way to verify backups on an endpoint that I control

I would like to figure out the best way to verify the integrity of the files that Duplicati is backing up to a NAS that I set up myself at a family member's house, without blowing through my 1 TB per month data cap.

So here is a little about my setup:
My largest backups are collections of large files (3-7 GB per file on average) that don't change much, but new ones are added frequently. Currently my backups total about 3 TB. I can bring the NAS back to my place if need be, but that defeats the purpose of it being offsite. Both the source and the destination for my backups run the Debian-based OpenMediaVault distro, and I am running Duplicati in a Docker container with a persistent /data folder.

From what I've read so far on the forums, the best way to verify backups is simply to download the files, but because of my ISP's data cap and how much the connection is already being used, I estimate it would take six months or longer for Duplicati to check each file once over my WAN.

My first thought for verification was bringing the server back to my place temporarily and running a full verification, then setting up a program that hashes all the files Duplicati places on the backup server to establish a baseline, and regularly comparing against that baseline to see if the files have changed. Is this a viable method? Which Duplicati file extensions would I want to include in the check?
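To make that concrete, the kind of thing I have in mind is a small script like the rough sketch below (the destination path and baseline file name are just placeholders, and the extension patterns are my guess -- I believe the remote volumes are the dblock/dindex/dlist files, ending in .zip or .zip.aes when encryption is on, but please correct me):

```python
#!/usr/bin/env python3
"""Rough sketch: build a SHA-256 baseline of the Duplicati destination
folder and compare later runs against it to spot bit rot.
The paths and extension patterns below are placeholders for my setup."""

import hashlib
import json
import sys
from pathlib import Path

DEST = Path("/srv/dev-disk-by-label-backup/duplicati")      # backup destination on the NAS
BASELINE = Path("/srv/dev-disk-by-label-backup/baseline.json")

# My guess at what to include: Duplicati's remote volumes
# (dblock/dindex/dlist, with .zip.aes when encryption is enabled).
PATTERNS = ["*.dblock.zip*", "*.dindex.zip*", "*.dlist.zip*"]


def hash_file(path):
    """Stream the file through SHA-256 so 3-7 GB files don't need to fit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()


def scan():
    """Hash every matching file in the destination folder."""
    files = sorted(p for pat in PATTERNS for p in DEST.glob(pat))
    return {p.name: hash_file(p) for p in files}


if __name__ == "__main__":
    current = scan()
    if not BASELINE.exists() or "--rebuild" in sys.argv:
        BASELINE.write_text(json.dumps(current, indent=2))
        print(f"Baseline written for {len(current)} files.")
        sys.exit(0)

    baseline = json.loads(BASELINE.read_text())
    changed = [n for n in baseline if n in current and current[n] != baseline[n]]
    missing = [n for n in baseline if n not in current]
    for name in changed:
        print(f"HASH MISMATCH: {name}")
    for name in missing:
        print(f"MISSING: {name}")
    print(f"{len(current)} files checked, {len(changed)} mismatched, {len(missing)} missing.")
    sys.exit(1 if (changed or missing) else 0)
```

New volumes added after the baseline is written wouldn't be flagged by the comparison, so I'd rerun it with --rebuild after each backup run finishes.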

My second thought was: if there is a way to sync the database from my source to the backup server, could I run the verification from there and avoid transferring anything except the database across the network? Could I make something like this work by juggling the Docker container's data folder between the servers every once in a while to verify the backups? I would just have to change the destination so that it is reachable from the backup server itself, correct?

You can copy the db to the remote location, install Duplicati, point the config to the db, and run a full verification.
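One wrinkle: if the source Duplicati might be running when you grab the database, it is safer to take a snapshot than to copy the live file. A rough sketch using Python's sqlite3 online-backup API (the file names and paths below are made up; the per-job database should be the randomly named .sqlite file in the container's persistent /data folder):

```python
#!/usr/bin/env python3
"""Rough sketch: take a consistent snapshot of a Duplicati job database
before shipping it to the backup box. Paths below are placeholders."""

import sqlite3

# The per-job database is the randomly named .sqlite file in the
# container's persistent /data folder (the name below is made up).
SRC = "/srv/docker/duplicati/data/XXXXXXXXXX.sqlite"
DST = "/tmp/duplicati-job-snapshot.sqlite"

src = sqlite3.connect(SRC)
dst = sqlite3.connect(DST)
with dst:
    # The online backup API produces a consistent copy even if
    # another process still has the database open.
    src.backup(dst)
src.close()
dst.close()
print(f"Snapshot written to {DST}; copy that file to the remote box.")
```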

To elaborate on the copy-the-database idea, offer an enhancement, and suggest an alternative if you want a lighter level of checking:

Because you want to avoid your remote Duplicati changing the destination, the test command would probably be a good method. You can say all for the number of samples, and add --full-remote-verification to look inside the files (which does sometimes find issues baked into a file by Duplicati). If you don't want that level of checking, you can get a simpler hash check that guards against bit rot by setting --upload-verification-file and then running the DuplicatiVerify.py script from the Duplicati utility-scripts directory to see whether the destination hashes agree with the recorded ones.
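For what it's worth, here is a rough sketch of what driving that from the backup box could look like, wrapped in Python only because that machine will have Python around anyway for DuplicatiVerify.py. The destination URL and database path are placeholders, and it assumes the duplicati-cli wrapper that the Linux packages install (adjust if you run the CLI some other way):

```python
#!/usr/bin/env python3
"""Rough sketch: run Duplicati's test command on the backup box against the
local copy of the files, using the copied job database. Paths/URLs are placeholders."""

import subprocess

DB = "/tmp/duplicati-job-snapshot.sqlite"                 # job database copied from the source
DEST = "file:///srv/dev-disk-by-label-backup/duplicati"   # same files, reached as a local path

cmd = [
    "duplicati-cli", "test", DEST, "all",   # "all" = test every remote volume, not a sample
    f"--dbpath={DB}",                       # point at the copied database
    "--full-remote-verification",           # look inside the volumes, not just their hashes
    # add --passphrase=... here if the backup is encrypted
]

result = subprocess.run(cmd)
raise SystemExit(result.returncode)
```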