Yep, that works too. I think I saw another user talking about setting up a 2nd backup job with the sole purpose of backing up the 1st job’s database file.
If you’re storing in the cloud, that makes sense as the de-duplication will cut down on data transfer. The drawback is that if you end up needing to restore the 1st job you’ll first have to restore the 2nd job to get the DB to then use on the 1st job.
It’s not a big deal - having a small backup of just the database file(s) should restore very quickly, but it’s another step to keep in mind in case of disaster recovery.
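To make that recovery order concrete, here’s a rough sketch using the Duplicati command line. The storage URLs, file names, and paths are placeholders I made up, and you should check your install for the exact CLI name and supported options - treat this as an illustration of the ordering, not a recipe:

```shell
#!/bin/sh
# Disaster-recovery order for the two-job setup described above.
# All URLs and paths below are placeholders, not real endpoints.

# Step 1: restore the 2nd (small) job first - it holds the 1st job's
# local database file, so it should come back quickly.
duplicati-cli restore "s3://bucket/db-backup" "job1.sqlite" \
    --restore-path="/tmp/restored-db"

# Step 2: point the 1st job at the recovered database and restore it,
# avoiding the slow database recreate.
duplicati-cli restore "s3://bucket/main-backup" "*" \
    --dbpath="/tmp/restored-db/job1.sqlite" \
    --restore-path="/restore/target"
```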
I haven’t looked specifically at the database recreate process, but my guess is there’s just a lot of database processing going on. Remember, this process isn’t just restoring a file from a remote destination - it’s reading through every index file on the destination and adding the entire history of every file’s contents, in 100KB (by default) blocks of data.
With a 12TB data source chopped into 100KB blocks, that’s 128.8 million block records being recreated in the local database for the initial backup alone. Then add to that however many blocks changed with each new file or historical version of an existing file.
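For anyone curious where that figure comes from, a quick back-of-the-envelope check reproduces it if you read the sizes as binary units (my assumption, since the post just says “12TB” and “100k”):

```shell
#!/bin/sh
# Rough block-count estimate: 12 TiB source at 100 KiB per block.
# Binary units are an assumption on my part; they reproduce the
# 128.8 million figure quoted above.
source_bytes=$((12 * 1024 * 1024 * 1024 * 1024))
block_bytes=$((100 * 1024))
blocks=$((source_bytes / block_bytes))
echo "$blocks"   # 128849018 -> roughly 128.8 million block records
```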
Once some of the more commonly used functionality has been updated and finalized (like the pending improvement in restore browsing speed) I’m hoping time can be found to review the database recreate process for performance (as well as reporting) improvements.
My guess is that once a Recreate is started a database will have been created, at which point a Repair should be enough to continue from any interruption point - but @kenkendk would know better.
Edit: Note that my guess is WRONG and it appears once a “The database was attempted repaired, but the repair did not complete.” message is received a Recreate action is necessary to recover.
@Bazzu85, have a little patience - kenkendk generally only has time to hit the forums once or twice a week.
@drwtsn32, I don’t know of any list, but pretty much every parameter is exposed with a DUPLICATI__ prefix (yes, two trailing underscores) and any “-” replaced with “_”.
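As a sketch of that naming rule - the prefix and the dash-to-underscore swap are from the description above, but the option names I pass in are just made-up examples:

```shell
#!/bin/sh
# Convert a Duplicati option name to the environment-variable form
# described above: prefix "DUPLICATI__" (two underscores) and every
# "-" replaced with "_". Option names are illustrative only.
to_env_name() {
    printf 'DUPLICATI__%s\n' "$(printf '%s' "$1" | tr '-' '_')"
}

to_env_name "backup-name"      # -> DUPLICATI__backup_name
to_env_name "dblock-size"      # -> DUPLICATI__dblock_size
```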
I ended up putting a `set > filename` in my post-backup script so I could see all the values. Here they are in case others are curious (some values have been redacted). This was from version 2.0.2.1.
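For reference, a minimal version of that post-backup script, with a grep added so only the Duplicati-provided variables are kept (the output path is just an example):

```shell
#!/bin/sh
# Post-backup script: dump the environment so the DUPLICATI__* values
# can be inspected later. "set" prints everything the shell knows
# about; the grep keeps only the Duplicati-provided variables.
set | grep '^DUPLICATI__' > /tmp/duplicati-env.txt
```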
However, I should note that the list generated from `set` won’t include parameters with no value, so what you got is probably only the parameters that had values for the particular job you ran the `set` command on.
So there are a lot of optional/advanced/less-commonly-used parameters not represented in the above list.
For convenience, I’ll just start over with a new job for the 4TB film backup… and do a database backup every time I manually complete the job, so I can use it to fix the DB…