I have a retention policy of 14 days. Swell. However, I have thousands of files in the same bucket/folder that appear to be orphaned. I only have one computer backing up to this Wasabi (AWS S3-compatible) storage.
Rebuilding the database has no effect.
Should I just wrap a tool around it and delete the old files from 2018-2022?
File name: duplicati-b00def39caa534ddb9edcb452ae0da6ce.dblock.zip.aes
Compacting files at the backend explains how wasted space is cleaned up. The COMPACT command explains how to tune it if you want it working harder.
Aggressive compacting does lots of downloading, and Wasabi does have limits. The AFFECTED command, given a dblock name, says what source files it holds.
The GUI Commandline might be an easier place to work than the actual OS CLI.
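If you'd rather script it from the OS side, here's a rough sketch of looping the affected command over a few suspect dblock names. The CLI binary name, storage URL, and database path below are placeholders/assumptions you'd replace with your own values.

```python
# Rough sketch: run Duplicati's "affected" command for a few suspect dblock files.
# The binary name, storage URL, and --dbpath are placeholders; adjust them to
# match your own install and job settings.
import subprocess

DUPLICATI_CLI = "duplicati-cli"           # or "Duplicati.CommandLine.exe" on Windows
STORAGE_URL = "s3://my-bucket/my-folder"  # plus whatever Wasabi/auth options your job uses
DBPATH = "/path/to/job-database.sqlite"   # the job's local database

suspect_dblocks = [
    "duplicati-b00def39caa534ddb9edcb452ae0da6ce.dblock.zip.aes",
]

for name in suspect_dblocks:
    # "affected <storage-url> <remote-filename>" reports which source files
    # and backup versions reference the given remote volume.
    result = subprocess.run(
        [DUPLICATI_CLI, "affected", STORAGE_URL, name, f"--dbpath={DBPATH}"],
        capture_output=True, text=True,
    )
    print(f"=== {name} ===")
    print(result.stdout)
```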
That means you can no longer restore a version of a file (or the file at all, if it was deleted) from more than 14 days back. There might not be much space saved by having such a short retention policy, but you can investigate.
The Introduction covers how changed blocks get uploaded; unchanged blocks get backreferenced instead.
Your old dblock files are possibly the original, slowly changing base that all newer backups are built on.
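To make the backreferencing idea concrete, here's a toy sketch of block-level deduplication (not Duplicati's actual code): only blocks whose hash hasn't been seen before get uploaded, so an old dblock volume stays referenced for as long as any current version still contains its blocks.

```python
# Toy illustration of block-level deduplication (not Duplicati's real implementation).
# Blocks already stored remotely are only referenced again; unchanged data never
# re-uploads, which is why old dblock volumes stay in use as long as current versions need them.
import hashlib

BLOCK_SIZE = 100 * 1024  # illustrative block size; Duplicati's is configurable

stored_blocks = {}  # block hash -> remote volume that holds the block


def backup_file(data: bytes, volume_name: str) -> list[str]:
    """Return the list of block hashes that make up this file version."""
    block_list = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in stored_blocks:
            stored_blocks[digest] = volume_name  # "upload" the new block
        # otherwise: backreference the existing block, no upload needed
        block_list.append(digest)
    return block_list


v1 = backup_file(b"A" * 250_000, "dblock-2018")                   # all blocks new
v2 = backup_file(b"A" * 250_000 + b"B" * 50_000, "dblock-2024")   # only the changed tail is new
print(set(stored_blocks.values()))  # the 2018 volume is still referenced by the 2024 version
```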
Hello, is it correct to assume that the default is to compact at the end of every backup job, so it should not be necessary to run it manually unless I wanted to change the parameters to be more aggressive?
No. Compact is always subject to a check of whether compacting is actually needed; otherwise it would be wasted work.
It should, however, be an automatic event that runs based on its current configuration.
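Roughly, the decision looks like this sketch (not Duplicati's actual code). The option names --threshold and --small-file-max-count are tuning knobs from the COMPACT command; the default values used here are from memory, so check the manual.

```python
# Illustrative sketch of "compact only when needed" (not Duplicati's actual code).
# Roughly: compact runs after a backup only if enough space is wasted, or if too
# many undersized volumes have piled up. The defaults below are assumptions.
def compact_needed(wasted_bytes: int, total_bytes: int,
                   small_volume_count: int,
                   threshold_percent: int = 25,        # cf. --threshold
                   small_file_max_count: int = 20) -> bool:   # cf. --small-file-max-count
    if total_bytes == 0:
        return False
    wasted_percent = 100 * wasted_bytes / total_bytes
    return (wasted_percent >= threshold_percent
            or small_volume_count > small_file_max_count)


# Example: 10% wasted space and 3 small volumes -> no compact, the work would be wasted.
print(compact_needed(wasted_bytes=5 * 2**30, total_bytes=50 * 2**30, small_volume_count=3))
```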
I still think I have some junk in my bucket.
I deleted jobs, but did not tick "also delete the files".
How do I identify the orphaned files? Just start poking at them with the command-line utility? There's a gazillion of them.
From now on I will be sure to prefix the files to identify them. Can I change the existing job to prefix the files? If so, will it change the files that already exist?
This will definitely leave orphans, with no hope of compact cleaning them up on a job run, because the job is gone.
Presumably you used different folders (otherwise Duplicati complains), and you can delete those.
For orphans of the above sort (if there are others), work out which folders you aren't using now.
You can also find the folders you are using by inspecting all the jobs. Don't delete a folder that's in use.
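If it helps, here's a sketch (assuming boto3, a Wasabi endpoint, and that each job writes to its own top-level folder; the bucket name, credentials, and folder names are placeholders) that lists the bucket's top-level folders so you can compare them against your jobs:

```python
# Sketch: list top-level "folders" in the bucket and compare against folders
# still referenced by your Duplicati jobs. Endpoint, bucket, and credentials
# are placeholders; assumes boto3 is installed.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",
    aws_access_key_id="YOUR_KEY",
    aws_secret_access_key="YOUR_SECRET",
)

resp = s3.list_objects_v2(Bucket="my-backup-bucket", Delimiter="/")
bucket_folders = {p["Prefix"].rstrip("/") for p in resp.get("CommonPrefixes", [])}

# Folders your current jobs actually write to (collect these from each job's settings).
folders_in_use = {"laptop-backup", "documents-backup"}

print("Possibly orphaned folders:", bucket_folders - folders_in_use)
```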
Aside from the above deliberately orphaned files, I’m not certain that you actually have any.
If you have a dblock file (those are often the big ones) that looks suspect, use affected as above to see what it holds.
If it is unused, it deserves a look. If it's in use, then test until you're satisfied about the situation.
Can Wasabi tell you space per folder? If not, there’s probably some S3 tool that can do that.
Duplicati shows per-job-folder space on the home screen. The Complete log has even more detail.
If the file counts and total sizes line up roughly with the folder contents, you probably don't have orphans.
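As a sketch of such a tool (same assumptions as above: boto3, placeholder endpoint, bucket, and credentials), this sums size and file count per top-level folder so you can line it up with the per-job numbers on the home screen:

```python
# Sketch: total size and file count per top-level folder, to compare against the
# per-job numbers Duplicati shows on its home screen. Endpoint, bucket, and
# credentials are placeholders; assumes boto3 is installed.
from collections import defaultdict
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",
    aws_access_key_id="YOUR_KEY",
    aws_secret_access_key="YOUR_SECRET",
)

sizes = defaultdict(int)
counts = defaultdict(int)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-backup-bucket"):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        folder = key.split("/", 1)[0] if "/" in key else "(root)"
        sizes[folder] += obj["Size"]
        counts[folder] += 1

for folder in sorted(sizes):
    print(f"{folder}: {counts[folder]} files, {sizes[folder] / 2**30:.2f} GiB")
```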