Hello!
I had an array failure and I am in the process of restoring to a new data source.
I have disabled backup operations on the original server housing the storage and the Duplicati instance. Do I need to edit my retention policy to prevent existing backups from being purged? (Or does Duplicati only apply the retention policy when a backup is run?)
I’ve got a few terabytes of data to restore from a cloud backup so it’s going to be lengthy and I’m afraid my retention policy is going to start wiping out my daily backups from the past few days.
Duplicati only does one operation at a time, triggered by a manual request or a scheduled backup, so it won’t spring a delete on you while idle or during a restore; it also looks like a delete is only ever implied by a backup run. I’m assuming you’re restoring from the most recent backup, and that backup is protected:
By default, the last fileset cannot be removed. This is a safeguard to make sure that all remote data is not deleted by a configuration mistake. Use this flag to disable that protection, such that all filesets can be deleted.
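For reference, that quoted safeguard is (as far as I can tell) the help text for Duplicati’s --allow-full-removal option. A minimal sketch of where it comes into play, assuming the standard command-line delete syntax; the storage URL here is a placeholder:

```
# Deleting a range of old versions works normally:
Duplicati.CommandLine.exe delete "s3://example-bucket/backup" --version=1-5

# A delete that would remove every remaining fileset is refused unless
# the safeguard quoted above is explicitly disabled:
Duplicati.CommandLine.exe delete "s3://example-bucket/backup" --version=0-999 --allow-full-removal
```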
So if I’m reading this all right, it’s hard to delete the last backup. However, a surprise pointed out at Why did smart retention delete this backup set? gives a case where, upon return to normal usage, backup thinning may delete backups you’d wish were still there for all the usual reasons, such as being able to pick a certain version out of recent work. You could lose many dailies on the next backup.
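To make the thinning concrete, here’s a sketch of the policy string I believe the smart retention preset maps to (comma-separated timeframe:interval pairs):

```
--retention-policy="1W:1D,4W:1W,12M:1M"
#   last week:      keep at most one backup per day
#   last 4 weeks:   keep at most one backup per week
#   last 12 months: keep at most one backup per month
# On the next backup run, dailies that have aged past the one-week window
# are thinned to one per week, so several recent dailies can vanish at once.
```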
Basically, the reason you’d want to alter normal operation is the recovery itself. Generally I prefer keeping information around until whatever was wrong has been proven solved. That’s a fancy way of saying it’s probably best to adjust things now, just for a different reason than the one you were worried about.
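Concretely, the adjustment could look something like this (a sketch assuming a CLI-driven job; in the web UI it’s the same retention setting under the job’s Options). Duplicati keeps every version when none of --keep-time, --keep-versions, or --retention-policy is set, so dropping the option during recovery is the conservative move:

```
# During recovery: run backups with no retention option, keeping all versions.
Duplicati.CommandLine.exe backup "s3://example-bucket/backup" "D:\data"

# Once the recovery is proven solved, switch the normal thinning back on:
Duplicati.CommandLine.exe backup "s3://example-bucket/backup" "D:\data" --retention-policy="1W:1D,4W:1W,12M:1M"
```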