The Operation is not valid for the object's storage class

Duplicati is configured to use S3 as the back-end. S3 is configured to transition data to Glacier after 14 days and to expire data after 99 days.
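
For reference, an S3 lifecycle rule matching that description would look roughly like this when applied with the AWS CLI (bucket name and rule ID are placeholders, not the actual values):

# transition objects to Glacier after 14 days, expire them after 99 days
aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket --lifecycle-configuration '{
  "Rules": [{
    "ID": "glacier-transition",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
    "Expiration": {"Days": 99}
  }]
}'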

Receiving the following errors when running a backup:

DeletedFiles: 0
DeletedFolders: 0
ModifiedFiles: 0
ExaminedFiles: 7
OpenedFiles: 0
AddedFiles: 0
SizeOfModifiedFiles: 0
SizeOfAddedFiles: 0
SizeOfExaminedFiles: 158267705
SizeOfOpenedFiles: 0
NotProcessedFiles: 0
AddedFolders: 0
TooLargeFiles: 0
FilesWithError: 0
ModifiedFolders: 0
ModifiedSymlinks: 0
AddedSymlinks: 0
DeletedSymlinks: 0
PartialBackup: False
Dryrun: False
MainOperation: Backup
CompactResults: null
DeleteResults:
    DeletedSets: []
    Dryrun: False
    MainOperation: Delete
    CompactResults: null
    ParsedResult: Success
    EndTime: 8/27/2018 5:17:12 AM (1535372232)
    BeginTime: 8/27/2018 5:17:12 AM (1535372232)
    Duration: 00:00:00.0781250
BackendStatistics:
    RemoteCalls: 17
    BytesUploaded: 0
    BytesDownloaded: 0
    FilesUploaded: 0
    FilesDownloaded: 0
    FilesDeleted: 0
    FoldersCreated: 0
    RetryAttempts: 12
    UnknownFileSize: 0
    UnknownFileCount: 1
    KnownFileCount: 82
    KnownFileSize: 643484201
    LastBackupDate: 8/27/2018 4:33:27 AM (1535369607)
    BackupListCount: 28
    TotalQuotaSpace: 0
    FreeQuotaSpace: 0
    AssignedQuotaSpace: -1
    ReportedQuotaError: False
    ReportedQuotaWarning: False
    ParsedResult: Success
RepairResults: null
TestResults:
    MainOperation: Test
    Verifications: [
        Key: duplicati-20180801T170018Z.dlist.zip
        Value: [
            Key: Error
            Value: The operation is not valid for the object's storage class
        ],
        Key: duplicati-i8eff908e3fb6475286dc7f8bc01d786c.dindex.zip
        Value: [
            Key: Error
            Value: The operation is not valid for the object's storage class
        ],
        Key: duplicati-b27c4bba820b24c5187e8e042a4cae074.dblock.zip
        Value: [
            Key: Error
            Value: The operation is not valid for the object's storage class
        ]
    ]
    ParsedResult: Success
    EndTime: 8/27/2018 5:19:18 AM (1535372358)
    BeginTime: 8/27/2018 5:17:12 AM (1535372232)
    Duration: 00:02:05.8469760
ParsedResult: Error
EndTime: 8/27/2018 5:19:18 AM (1535372358)
BeginTime: 8/27/2018 5:17:11 AM (1535372231)
Duration: 00:02:07.1282206
Messages: [
    No remote filesets were deleted,
    removing file listed as Temporary: duplicati-b2ab794285e344d3d84815d57c4be907b.dblock.zip,
    removing file listed as Temporary: duplicati-i450e9505fbed4ed3a90d932e3b19f77f.dindex.zip
]
Warnings: []
Errors: [
    Failed to process file duplicati-20180801T170018Z.dlist.zip => The operation is not valid for the object's storage class,
    Failed to process file duplicati-i8eff908e3fb6475286dc7f8bc01d786c.dindex.zip => The operation is not valid for the object's storage class,
    Failed to process file duplicati-b27c4bba820b24c5187e8e042a4cae074.dblock.zip => The operation is not valid for the object's storage class
]

When I run a repair, I receive a message that everything is good:

MainOperation: Repair
RecreateDatabaseResults: null
ParsedResult: Success
EndTime: 8/27/2018 5:25:43 AM (1535372743)
BeginTime: 8/27/2018 5:25:42 AM (1535372742)
Duration: 00:00:00.4389934
Messages: [
    Destination and database are synchronized, not making any changes
]
Warnings: []
Errors: []
BackendStatistics:
    RemoteCalls: 1
    BytesUploaded: 0
    BytesDownloaded: 0
    FilesUploaded: 0
    FilesDownloaded: 0
    FilesDeleted: 0
    FoldersCreated: 0
    RetryAttempts: 0
    UnknownFileSize: 0
    UnknownFileCount: 1
    KnownFileCount: 82
    KnownFileSize: 643484201
    LastBackupDate: 8/27/2018 4:33:27 AM (1535369607)
    BackupListCount: 28
    TotalQuotaSpace: 0
    FreeQuotaSpace: 0
    AssignedQuotaSpace: -1
    ReportedQuotaError: False
    ReportedQuotaWarning: False
    ParsedResult: Success

When I attempt a verify, I receive the errors listed above. Has anyone else seen this type of issue using S3 as a backend?

Hi @rclemens, welcome to the forum!

Duplicati doesn’t work well with cold storage destinations (like Glacier) unless certain settings are used to disable the operations that pull files back down, such as validation tests and compacting (pruning expired versions).

Did you configure any of those operations not to happen?
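
The options involved are along these lines (a sketch; double-check the names against the advanced options list in your Duplicati version):

--no-backend-verification=true   # skip the remote file list check around backups
--no-auto-compact=true           # don't download and repack volumes to reclaim space
--backup-test-samples=0          # don't download sample files to verify after each backup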

Thank you for your response.

I am discovering that you are right. Every error I get is related to a file that has been moved to Glacier. This makes sense, since the file is just a placeholder and cannot be acted on without first requesting it back from AWS (a 3-5 hour wait).

The Duplicati backup jobs are configured to delete backups older than 90 days. To be honest, I haven’t seen any issues because of this.

I am now using the “no-backend-verification” flag; are there any others I should look at using?

Definitely don’t let your destination storage prune/delete files. Use Duplicati’s built-in retention features; it knows which remote files can be safely deleted once retention criteria are met.
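
For example, the 90-day rule could be moved out of the S3 lifecycle policy and into the backup job itself with something like this (a sketch; keep-time is one of Duplicati’s retention options):

--keep-time=90D   # let Duplicati delete backup versions older than 90 days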

Duplicati also handles versioning on its own. Configure your remote storage to retain only the most recent version of each file in the S3 bucket.
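
If bucket versioning is enabled, suspending it keeps S3 from stacking up old copies behind Duplicati’s back; for example, with the AWS CLI (bucket name is a placeholder):

aws s3api put-bucket-versioning --bucket my-backup-bucket --versioning-configuration Status=Suspended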

Thanks to everyone for your help in resolving this issue. I have implemented the following Advanced Settings:

--no-backend-verification=true
--no-auto-compact=true

I no longer receive the error “The operation is not valid for the object’s storage class.”
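
For anyone doing this from the command line rather than the UI, the equivalent invocation would look something like this (a sketch; the target URL and source path are placeholders and the S3 credentials are omitted — the binary is duplicati-cli on Linux, Duplicati.CommandLine.exe on Windows):

duplicati-cli backup "s3://my-backup-bucket/backups" "/home/user/data" \
  --no-backend-verification=true \
  --no-auto-compact=true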

Added the following settings:

--no-backend-verification=true
--no-auto-compact=true

Worked like a champ.

Good to hear it’s working for you!

Just so you know what to expect, you might want to try a test restore of some older files/versions, as you’ll probably have to manually request that the cold storage files be “warmed up” so Duplicati can get to them for the restore.

I’ve not used cold storage myself, but I hear it can take a day or two for files to “thaw”. Oh, and if you thaw all the backup files once in a while, that might be a good time to kick off a manual compact or verification.
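
For what it’s worth, on S3 the “thawing” is a per-object restore request; something like this with the AWS CLI (bucket and key are placeholders, and the retrieval tier affects the wait time):

# makes a temporary readable copy of the object available for 7 days
aws s3api restore-object --bucket my-backup-bucket \
  --key backups/duplicati-20180801T170018Z.dlist.zip \
  --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'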

@Pectojin, do you know of a way to get a list of the files needed to complete a particular restore? That might be handy for people using cold storage or destination cloning (rsync) so they don’t need to pre-fetch ALL the backup files for a small restore.

Maybe the list command is helpful?

Sadly, I can’t find any documentation on using it, so I’m not sure exactly how or if it helps.

But it would make sense to be able to check this.
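
If it behaves the way the built-in help suggests, something like this would at least show which backup versions contain a given file (a sketch, untested; the target URL and filename are placeholders):

duplicati-cli list "s3://my-backup-bucket/backups" "*/Documents/report.docx" --all-versions=true

Whether it can also map that back to the specific dblock volumes a restore would need is the part I’m unsure about.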