Handful of questions for you all. Hope you don’t mind them jammed into one post, but I’m new, so I have many.
Is it normal for a backup to end up as many 50MB files ending in .zip.aes? For some reason I was under the impression these would all be zipped into one file when the backup finishes.
If the answer to #1 is yes and this is the norm, is there another way to verify backups besides going into the restore option?
Related to #2, I get errors for every single test job I’ve run, and when I view the logs, all I see are put, get, and list entries that don’t mean much to me. How can I tell what the issues are, and whether they affected the integrity of the backup?
I thought I read that you should not (or cannot) use a single backup folder for multiple/all backups. I tried saving two different system backups to the same folder and it errored out. Does this mean that if I had to back up multiple machines (dozens, hundreds, etc.), I would need to create a separate destination folder for each system backup?
I have successfully tested a backup to a mapped drive on another local server, but the S3 test jobs fail with errors similar to those mentioned in #3 above.
Once I get #5 squared away, does anyone know the correct format for a folder path in S3? I set up the connection using this doc (Setting up Amazon S3), but I left the folder path empty. If I created a new folder for a new backup job, would I simply put that folder in the path field, or is more required (bucket name, slashes, etc.)?
I guess I’ll stop there and hope that’s not too much for one post. Any help or push in the right direction so I can help myself would be appreciated.
Yes. 50MB is the default upload volume size, so a backup is always stored as a series of volumes around that size - a job with, say, 10GB of unique data will end up as a couple hundred of those zip.aes files. I personally haven’t found any need to change it.
Duplicati does lots of checks while it performs backups. If it reports a backup as successful you should be fine. You can do test restores to double check. Looking at the ZIP files directly isn’t the right way - they would need to be decrypted first and even then they’d only contain deduplicated chunks - not your original files in native format.
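If you want an extra sanity check after a test restore, you can also compare a checksum of the original file against the restored copy. Here’s a quick sketch in Python - the file paths are made up, so substitute your own:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Made-up paths: the original file vs. the copy Duplicati restored.
original = Path(r"C:\Data\report.xlsx")
restored = Path(r"C:\RestoreTest\report.xlsx")
print(sha256(original) == sha256(restored))  # True means the file came back bit-for-bit identical
```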
If an error happens you should be able to click the red popup at the bottom of the Duplicati window to get more details. I’m guessing these errors are because your S3 setup isn’t correct.
Correct - every computer (and even every job on each computer) needs to target a unique location. What I do is create folders in the root of my B2 bucket and use a naming convention like ComputerName-BackupJob.
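If you ever do get to dozens or hundreds of machines, that convention is easy to script. Just a throwaway sketch with made-up hostnames and job names:

```python
# Throwaway sketch: build one destination folder name per computer/job pair.
# The hostnames and job names are made up - the point is the naming convention.
computers = ["FILESRV01", "APPSRV02", "SQL01"]
jobs = ["SystemImage", "UserData"]

for computer in computers:
    for job in jobs:
        print(f"/{computer}-{job}")  # e.g. /FILESRV01-SystemImage
```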
When I was using S3 (before I switched to B2) I created a user in IAM along with an access policy. I set the policy to grant the necessary rights on the S3 bucket and then attached it to the IAM user. That IAM user gets its own access key ID and secret key, which are what you use with Duplicati.
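If it helps, here’s roughly what that looks like scripted with boto3 instead of clicking through the console. The bucket, user, and policy names are placeholders, and depending on your Duplicati options you may need to allow a few more S3 actions than I’ve listed here - treat it as a sketch, not the exact policy I used:

```python
import json
import boto3

iam = boto3.client("iam")

BUCKET = "my-backup-bucket"   # placeholder bucket name
USER = "duplicati-backup"     # placeholder IAM user name

# Rough policy: list the bucket, plus read/write/delete objects inside it.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
    ],
}

policy = iam.create_policy(
    PolicyName="duplicati-backup-policy",
    PolicyDocument=json.dumps(policy_doc),
)
iam.create_user(UserName=USER)
iam.attach_user_policy(UserName=USER, PolicyArn=policy["Policy"]["Arn"])

# These credentials go into Duplicati's AWS Access ID / AWS Access Key fields.
key = iam.create_access_key(UserName=USER)["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])
```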
Leave “Server” at the default, enter your bucket name, leave region/storage class at the defaults, and for the folder path I use “/ComputerName-BackupJob” as I mentioned in #4. The AWS Access ID and AWS Access Key are for the IAM user you created that has access to this bucket per #5.
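One more thing, since your S3 jobs keep failing: you can sanity-check the IAM user’s keys and folder path outside of Duplicati with a few lines of boto3. Everything below is a placeholder - swap in your real keys, bucket, and prefix:

```python
import boto3

# Placeholders: use the IAM user's keys and your real bucket/folder prefix.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
)
BUCKET = "my-backup-bucket"
PREFIX = "ComputerName-BackupJob/"

# Can we write, list, and delete under the backup folder?
s3.put_object(Bucket=BUCKET, Key=PREFIX + "connection-test.txt", Body=b"test")
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
print([obj["Key"] for obj in resp.get("Contents", [])])
s3.delete_object(Bucket=BUCKET, Key=PREFIX + "connection-test.txt")
```

If any of those calls throws an AccessDenied error, the problem is on the IAM/policy side rather than in Duplicati, which would line up with the put/list errors you’re seeing in the logs.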
I can get into more detail on the S3 stuff if you need. Alternatively, if you’re new to AWS and not really attached to it, you could consider using Backblaze B2. B2 is cheaper (about 1/4 the cost of S3) and the setup is much simpler - each bucket automatically has its own access ID and key, so you don’t need to create IAM users.
I did see the part about the default size, but I was surprised that the final version of the backup was composed of so many smaller parts. NBD, I’ve just never seen backups stored in that fashion and wanted to make sure this was correct and not an error.
I’ve never had one report as successful. They always report errors, but they don’t say exactly what the error is. I have successfully tested restores of small .txt files, so I’m not sure what these backup errors are related to.
Yes, those are the errors I was referring to, and all they brought me to were the logs with “put” entries that didn’t make much sense to me.
Ok, understood. Also not what I’m used to, but it’s only a small inconvenience for the price we’re paying.
The S3 bucket has been set up for a while. It’s where I store other things like final OVF exports of VMs being decommissioned, so I’m not sure why this connection is failing from Duplicati.
EDIT: I should clarify and say the S3 connection doesn’t fail, at least not during setup of the backup job, but any backups I’ve attempted with an S3 destination have all failed.
AWS (and eventually Azure Blob) is what my company uses, so I’m kind of stuck with it. I’m just trying to find another way to protect some systems managed by my group, and also to enable users to do restores on their own if necessary. If I can’t get the connection to work, I may try an AWS Storage Gateway file share, since I’ve already been successfully backing up to a local file share.