Duplicati Backup Exclusion Times and Proactive Repairs?

Hi all,

I know this topic has been circled around before, but I’d like to ask the question directly and see what the best answer is.

I have locations where Duplicati is backing up servers with decently large datasets; in one instance, the dataset is over 2 TB. I’ve set the backup to use Smart Backup Retention, running once a day starting at 18:00. In theory this should work fine. In practice, end users arrive around 07:00 to find the server running very slowly, and when I log in to investigate, the Duplicati backup often hasn’t finished; in some instances it’s been hung up counting files. I try to shut the backup job down gracefully, but if that drags on for 10-15 minutes, my only option for getting users back to production-level performance is to stop the Duplicati service in Windows Services.

And this brings me to the use case I’m looking for an answer to: is there a way to tell Duplicati that the window for running backups is 18:00-07:00 Monday through Friday, plus all day Saturday and Sunday? That would save me from having to intervene manually.

If there is not a way to do this, could I script it with the Windows Task Scheduler so that it stops the Duplicati job or service at 07:00 and starts it back up at 18:00? I’m worried about the negative ramifications for the backup database and the job of just stopping the Duplicati service, so if there is a way to “hard stop” the Duplicati backup job from the command line, that would work better. Of course, this “hard stop” method would also need a way to say “if you haven’t stopped within X minutes, kill the service, because you’re likely locked up.”
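For what it’s worth, here’s the kind of watchdog I have in mind, as a rough Python sketch. The service name (“Duplicati”) and the process image name (“Duplicati.Server.exe”) are assumptions on my part; check yours with `sc query` and Task Manager (the image may be Duplicati.WindowsService.exe depending on how it’s installed), and the script needs to run elevated for `sc stop` and `taskkill` to work:

```python
# Rough watchdog sketch: ask the Duplicati Windows service to stop, then
# force-kill the process if it hasn't stopped within GRACE_MINUTES.
# ASSUMPTIONS: service name "Duplicati" and image "Duplicati.Server.exe";
# verify both on your own install. Must run as administrator.
import subprocess
import time

SERVICE_NAME = "Duplicati"               # verify with: sc query
PROCESS_IMAGE = "Duplicati.Server.exe"   # verify in Task Manager / tasklist
GRACE_MINUTES = 15

def service_state(name: str) -> str:
    """Return the last token of the STATE line from `sc query`, e.g. RUNNING."""
    out = subprocess.run(["sc", "query", name],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "STATE" in line:
            return line.split()[-1]
    return "UNKNOWN"

# Ask the Service Control Manager for a graceful stop.
subprocess.run(["sc", "stop", SERVICE_NAME])

deadline = time.monotonic() + GRACE_MINUTES * 60
while time.monotonic() < deadline:
    if service_state(SERVICE_NAME) == "STOPPED":
        print("Service stopped cleanly.")
        break
    time.sleep(30)
else:
    # Grace period expired without a clean stop: assume it's hung and kill it.
    print("Grace period expired; force-killing the process.")
    subprocess.run(["taskkill", "/F", "/IM", PROCESS_IMAGE])
```

Scheduled at 07:00, with a matching task running `sc start Duplicati` at 18:00, that would roughly approximate a backup window. I’d still want to know what a forced kill mid-backup does to the local database, though.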

Finally, as many others have noted, it seems our IT department is chasing a lot of failures caused by database issues. Here’s my latest one:

Failed: Found 37160 remote files that are not recorded in local storage, please run repair
Details: Duplicati.Library.Interface.UserInformationException: Found 37160 remote files that are not recorded in local storage, please run repair

Is there a way to proactively schedule a repair, or a delete-and-recreate of the database? For instance, could I schedule a repair once a week, before the backup begins?
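Something like the following is what I’m picturing, run from Task Scheduler before the weekly backup. This is only a sketch: the CLI path, storage URL, and database path are placeholders, and the real values for a given job can be pulled from the web UI via Export -> “As Command-line”:

```python
# Rough weekly-repair sketch, run from Task Scheduler before the backup.
# The CLI path, storage URL, and dbpath below are PLACEHOLDERS; get the
# real values from your job via Export -> "As Command-line" in the web UI.
import subprocess

DUPLICATI_CLI = r"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe"
STORAGE_URL = "file://D:/backup-target"                  # placeholder
DBPATH = r"C:\ProgramData\Duplicati\YOURJOBDB.sqlite"    # placeholder

result = subprocess.run([
    DUPLICATI_CLI,
    "repair",
    STORAGE_URL,
    f"--dbpath={DBPATH}",
    # "--passphrase=...",  # needed for encrypted backups; handle secrets your way
])
print("repair exit code:", result.returncode)
```

And if I understand correctly, when the local database file is missing entirely, `repair` rebuilds it from the remote data, which would cover the delete-and-recreate case as well; someone please correct me if that’s wrong.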

Thanks in advance – I’m just looking for ways to make Duplicati (an awesome product!) a more reliable and robust backup solution for everyone.

Run the backup on a server that has the data mirrored from the original server?

I’ve considered doing just that: using robocopy or rsync to synchronize the large dataset from the live server over to a NAS, and then backing up the NAS. But the size of the dataset is only a complication; the real problem is time.
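If I went that route, the mirror step itself is straightforward. Here’s a rough sketch wrapping robocopy; the source, destination, and log paths are placeholders:

```python
# Rough mirror-step sketch: sync the live share to the NAS, then back up
# the mirror instead of the live server. All paths are placeholders.
import subprocess

SOURCE = r"D:\Data"                   # live server data (placeholder)
DEST = r"\\nas\backup-mirror\Data"    # NAS mirror (placeholder)
LOG = r"C:\Logs\mirror.log"

# /MIR mirrors the tree (including deletions); /R and /W keep retries short
# so one locked file doesn't stall the sync; /NP keeps progress out of the log.
rc = subprocess.run([
    "robocopy", SOURCE, DEST,
    "/MIR", "/R:2", "/W:5", "/NP", f"/LOG:{LOG}",
]).returncode
print("robocopy exit code:", rc)  # below 8 means success / nothing to copy
```

(Robocopy exit codes below 8 mean success or “nothing to do”; 8 and above indicate failures, which is worth checking before kicking off the backup.)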

Other backup products I’ve used over my career (beginning in 1995 and all the way up to current) have had a way to say “the maximum run time of this backup job is X hours / Y minutes,” and you’d give yourself a little buffer. If the job was supposed to begin at 18:00 and had to be done by 07:00, that window is 13 hours, but you would enter 11 hours (660 minutes) to allow for the job starting a couple of hours late due to a media malfunction, or being held up by the completion of another job. Malfunctions were common with tape drives and reel-to-reel tape, and so were jobs overstaying their welcome and taking too long to complete.

My thought is, we need either a way to say “maximum run time = X minutes” or a way to say “hard stop at this time every weekday.”
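Until something like that exists, the closest approximation I can come up with is to run the job from the command line, triggered by Task Scheduler at 18:00, inside a wrapper that enforces the ceiling itself. Another rough Python sketch; the CLI path and job arguments are placeholders (exportable from the web UI, as above), and as far as I know Duplicati has no built-in maximum-run-time option, so the wrapper does the enforcement:

```python
# Rough "maximum run time" sketch: Task Scheduler launches this at 18:00,
# and the wrapper kills the backup if it's still running after 11 hours
# (660 minutes), leaving buffer before the 07:00 deadline. The CLI path,
# destination, and source below are PLACEHOLDERS from Export -> "As Command-line".
import subprocess

MAX_RUNTIME_SECONDS = 11 * 60 * 60   # 660 minutes

proc = subprocess.Popen([
    r"C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe",
    "backup",
    "file://D:/backup-target",       # placeholder destination
    r"D:\Data",                      # placeholder source
    # ...remaining options exported from the job...
])
try:
    proc.wait(timeout=MAX_RUNTIME_SECONDS)
    print("backup finished, exit code:", proc.returncode)
except subprocess.TimeoutExpired:
    # Past the ceiling: kill it so users aren't fighting the backup at 07:00.
    proc.kill()
    print("backup exceeded the window and was killed")
```

Killing a backup mid-run obviously isn’t ideal; my understanding is that Duplicati tries to clean up an interrupted backup on the next run, but that’s exactly the sort of behavior I’d like confirmed.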