There is lots of junk in my bucket

I have a retention policy of 14 days. Swell. However, I have thousands of files in the same bucket/folder that appear to be orphaned. I only have one computer backing up to this Wasabi AWS-compatible area.
Rebuilding the database has no effect.

Should I just wrap a tool around it and delete old files from 2018-2022?

  • File Name
  • File Size: 49.98 MB
  • Last Modified: Dec 24, 2019 4:00 PM

That means blocks uploaded from files four years ago can still be referenced by your backup versions from the last 14 days.


Yep! Don’t manually delete those files or you will hurt your backup integrity.

To add to both other good posts:

Compacting files at the backend explains how wasted space is cleaned up.
The COMPACT command explains how to tune it if you want it to run more.
Aggressive compacting does a lot of downloading, and Wasabi does have limits.
The AFFECTED command, given a dblock name, reports which source files it holds.
GUI Commandline might be an easier place to work than the actual OS CLI.
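
The need-to-compact idea above can be illustrated with a few lines of Python. This is a simplified sketch, not Duplicati's actual code: the per-volume accounting and the 25% default here are assumptions made for the example.

```python
# Sketch of a threshold-based compact decision (hypothetical, simplified).
# A dblock volume whose wasted fraction exceeds the threshold becomes a
# candidate for download, repack, and re-upload.
def compact_candidates(dblocks, threshold=0.25):
    """dblocks: list of (name, total_bytes, still_used_bytes) tuples."""
    candidates = []
    for name, total, used in dblocks:
        wasted = (total - used) / total if total else 0.0
        if wasted > threshold:
            candidates.append(name)
    return candidates

volumes = [
    ("dblock-a.zip", 50_000_000, 48_000_000),  # ~4% wasted: left alone
    ("dblock-b.zip", 50_000_000, 20_000_000),  # ~60% wasted: repack
]
print(compact_candidates(volumes))  # ['dblock-b.zip']
```

A higher threshold means less downloading but more wasted space left behind; a lower one trades the other way, which matters on a provider with download limits.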


That means you can no longer restore a version of a file (or the file at all, if it was deleted) from more than 14 days back. There might not be much space saved by having such a short retention policy, but you can investigate.

Introduction covers how changed blocks get uploaded; unchanged blocks are back-referenced instead.
Your old dblock files are possibly the original, slowly changing base that all newer backups are built on.
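
The back-referencing can be sketched in a few lines. This is a toy model, not Duplicati's actual implementation: the tiny block size, the hash choice, and the in-memory store are simplifications for the example.

```python
import hashlib

# Toy block-level deduplication (hypothetical, simplified): files are split
# into fixed-size blocks, and a block that is already stored is referenced
# by hash instead of being uploaded again.
BLOCK_SIZE = 4  # tiny for the demo; real block sizes are far larger

def store(data, stored_blocks):
    """Return (block hashes for data, number of newly uploaded blocks)."""
    hashes, uploaded = [], 0
    for i in range(0, len(data), BLOCK_SIZE):
        chunk = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(chunk).hexdigest()
        if h not in stored_blocks:
            stored_blocks[h] = chunk  # only new blocks cost an upload
            uploaded += 1
        hashes.append(h)
    return hashes, uploaded

blocks = {}
_, first = store(b"AAAABBBBCCCC", blocks)   # initial backup: 3 new blocks
_, second = store(b"AAAABBBBDDDD", blocks)  # one block changed: 1 new block
print(first, second)  # 3 1
```

That is why an old dblock can stay fully "in use": every later version keeps pointing at its unchanged blocks.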

Ah ha moment! Thank you that makes sense. :sunglasses:

Hello, is it correct to assume that the default is to compact at the end of every backup job, so it should not be necessary to run it manually unless I want to change the parameters to be more aggressive?

No. Compact is always subject to a need-to-compact check; otherwise it would be wasted work.
It should, however, be an automatic event that runs based on its current configuration.

Thank you. So the default is to check whether the backup needs to be compacted and, if it does, run the compact routine.

I still think I have some junk in my bucket :slight_smile:
I deleted jobs, but did not tick "also delete the files".
How do I identify the orphaned files? Just start poking at them with the command line utility? There's a gazillion of them.

From now on I will be sure to prefix the files to identify them. Can I change the existing job to prefix the files? If so, will it change the files that already exist?

This will definitely leave orphans, with no hope of compact cleaning them up on a job run, because the job is gone.
Presumably you used different folders for each job (otherwise Duplicati complains), so you can delete the unused ones.

For orphans of the above sort (if there are others), know which folders you aren't using now.
You can also find the folders you are using by inspecting all the jobs. Don't delete a folder that is in use.

Aside from the above deliberately orphaned files, I'm not certain that you actually have any.
If you suspect a particular dblock (those are often the big ones), use affected as above to see what it holds.
If it is unused, it deserves a look. If it's in use, then run test until you're satisfied about the situation.

Can Wasabi tell you space per folder? If not, there’s probably some S3 tool that can do that.
Duplicati shows per-job-folder space on the home screen. A Complete log has even more.
If the file counts and total sizes line up roughly per folder, you probably don't have orphans.


  "KnownFileCount": 756,
  "KnownFileSize": 15560175156,
  "LastBackupDate": "2023-05-22T06:20:01-04:00",
  "BackupListCount": 40,

Make sure your versions (a.k.a. BackupListCount, a.k.a. dlist files) get trimmed, if you like trimming.
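
As a sketch of what time-based trimming means (a simplified model in the spirit of a keep-for-N-days setting, not Duplicati's actual retention code, which also supports more elaborate thinning policies):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: pick which backup versions fall outside a simple
# keep-for-N-days window and are therefore candidates for deletion.
def versions_to_delete(version_dates, keep_days=14, now=None):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=keep_days)
    return [d for d in version_dates if d < cutoff]

now = datetime(2023, 5, 22, tzinfo=timezone.utc)
versions = [datetime(2023, 5, 21, tzinfo=timezone.utc),
            datetime(2023, 5, 1, tzinfo=timezone.utc)]
print(versions_to_delete(versions, 14, now))  # only the May 1 version
```

With 40 versions listed, a 14-day policy that backs up daily would keep roughly 14 of them, so a BackupListCount well above that is worth a look.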