This is a summary of some basic methods for testing your backup files without doing a full restore. It is written based on 188.8.131.52 canary but should work just fine going back to at least 184.108.40.206 beta.
This post is a wiki - if you see something incorrect, outdated, or just plain missing, feel free to fix or add it yourself by clicking on the button at the bottom right of this post!
Why would I want to test my backup files?
While by default Duplicati already tests one “random” set of backup files (a “fileset” = 1 dindex + 1 dlist + 1 dblock) after each backup run, a backup may generate more than one set of files per run, meaning that at one test per run Duplicati will never get around to testing all of the files.
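One hedged sketch of how the per-run sample count can be raised is below, using the real `--backup-test-samples` advanced option; the destination URL, source folder, and database path are placeholders you must replace with your own values.

```shell
# Sketch: verify 5 sample sets (instead of the default 1) after each backup.
# All bracketed paths are placeholders, not real locations.
Duplicati.CommandLine.exe backup "file://[path to my destination]" "[source folder]" \
  --dbpath="[path to my sqlite DB]" \
  --backup-test-samples=5
```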
Alternatively, maybe you had a “scare” with one of your drives (bad S.M.A.R.T messages or a dropped USB drive) and you want to double check that everything is OK.
Or perhaps you have moved your destination files from one provider to another (maybe even as part of a backup seed) and want to confirm everything got copied around OK.
Why might I NOT want to test my backup files?
Testing (test or verify) will download files from your destination - potentially ALL of them. Depending on your connection this can take a while, use up bandwidth (perhaps hitting usage caps), and slow down other things on your network (though you can use the --throttle-* options to minimize that issue).
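As a hedged sketch, the throttle options mentioned above can be added to a test run like this; the bandwidth value and bracketed paths are placeholder assumptions you should adjust.

```shell
# Sketch: cap download bandwidth during verification so other network
# traffic is not starved. Paths and the 2MB value are placeholders.
Duplicati.CommandLine.exe test "file://[path to my destination]" all \
  --dbpath="[path to my sqlite DB]" \
  --throttle-download=2MB
```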
Also, while this is a pretty good method of checking your backups, nothing replaces a good old fashioned FULL RESTORE.
How to run a GUI based test / verify
Click “Commandline …” in the job GUI
Change the “Command” dropdown to test (verify will also work - they do the same thing)
Replace all existing “Commandline arguments” with all (or a specific number of filesets you want to test)
Optionally add additional parameters (either on their own lines in “Commandline arguments” or using the “Advanced options” interface) such as:
--full-remote-verification=true (download, decrypt, compare to the database, then test extract & hash check a random 30% of the contents of each dblock file - otherwise it just does a hash check of the archive file itself, not the individual content files)
--console-log-level=XXXX (show additional info in the console at level XXXX)
--console-log-filter=YYYY (filter console results to those of type YYYY)
Note that there is no need to remove any pre-existing “Advanced options” as some might actually be needed (such as --dbpath). Also, parameters you add as part of a Commandline run are NOT saved, so if you are planning to do multiple Commandline runs you will have to add them each time.
Click the blue Run "test" command now (or Run "verify" command now) button at the bottom of the page
The --console-log-level and --console-log-filter parameters are only available in version 220.127.116.11 and higher. If using 18.104.22.168 or lower, either don’t use those parameters or replace them with --log-level=XXXX, which will put the info into a log file (but not on the console, sorry).
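A hedged sketch of the log-file approach for those older versions is below; the bracketed paths are placeholders, and this assumes the older --log-file / --log-level option pair rather than the newer console logging options.

```shell
# Sketch for older versions: send detailed output to a log file
# instead of the console. All bracketed paths are placeholders.
Duplicati.CommandLine.exe test "file://[path to my destination]" all \
  --dbpath="[path to my sqlite DB]" \
  --log-file="[path to a log file]" \
  --log-level=profiling
```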
Here’s a working copy/pasteable example for “Commandline arguments”:
all --full-remote-verification=true --console-log-level=profiling --console-log-filter=-*.Database.*;-*.SendMail*;-*RemoteOperationGet*;-*.BasicResults*
Remove the -*RemoteOperationGet*; text if you want to see the names of EVERY file downloaded, whether or not problems are found with them.
How to run a CLI based test / verify
Run the Duplicati.CommandLine.exe test command with the parameters mentioned above. For example:
Duplicati.CommandLine.exe test "[path to my destination]" all --dbpath="[path to my sqlite DB]" --console-log-level=profiling --console-log-filter=-*.Database.*
Note that test (or verify) will download one fileset (1 each of the dindex, dlist, and dblock files) at a time to your temp folder (you should see dup-* files coming and going in there), then test them, then delete them - so you shouldn’t need much more temp storage than your “Upload volume size” (dblock) size.
HOWEVER - eventually ALL files will have been downloaded from the destination, so be sure to keep that in mind if you have usage caps (see the info about “random” below). There should be no UPLOAD bandwidth usage as part of this process.
For the curious ones out there, Duplicati keeps track of how many times each file has been tested, so when “randomly” choosing the next file to test it will only pull from the files with the fewest tests logged against them. This means that running multiple partial tests (such as test 100 instead of test all) will make sure all files are tested at least once before any files are re-tested.
This can be handy if you have usage caps to manage as it means you can run partial tests over multiple time periods without worrying about “over-testing” some files while ignoring others.
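The cap-friendly partial-test idea above can be sketched as follows; the fileset count and bracketed paths are placeholder assumptions, and the least-tested-first behavior described above is what makes repeated runs eventually cover every file.

```shell
# Sketch: test only 100 filesets per run. Because Duplicati prefers the
# least-tested files, repeating this run periodically spreads the download
# load over time while still covering everything. Paths are placeholders.
Duplicati.CommandLine.exe test "file://[path to my destination]" 100 \
  --dbpath="[path to my sqlite DB]"
```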
Personally, I like detailed logs WITHOUT the database calls so I use: