I have a backup job that reports the backup size as 442.25 GB:
However, when I go into the folder with the actual files, the actual size is almost double, 808 GB:
I have already checked, and all files in the folder are either “dblock”, “dindex” or “dlist” files. Why could there be this mismatch? Does the “backup size” mean something different? Or could it be that some files should’ve been deleted but weren’t?
If it’s the latter, how do I make Duplicati remove them (or how do I find out which ones to remove myself)?
Thank you for your assistance!
Do you have the option no-auto-compact set to true?
Nope, these are my general options:
--compression-extension-file=C:\Program Files\Duplicati 2\default_compressed_extensions.txt
And these are my Backup-specific options:
--send-mail-subject=%PARSEDRESULT%: Duplicati %OPERATIONNAME% report for %backup-name%
My only guess is that maybe auto-vacuum is messing things up?
auto-vacuum concerns only the database size.
auto-cleanup runs an automatic repair before the backup if there are discrepancies between the backend and the database; if there were any, you would get warnings (unless your before-backup script is messing things up?). You can try running a repair manually.
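A manual repair can also be run from the command line. A minimal sketch, assuming a local-folder destination and the default install path; the destination URL and --dbpath values are placeholders, not this job’s actual settings:

```powershell
# Hypothetical paths: point the URL at your backup destination and
# --dbpath at the job's local database file.
& "C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" repair `
    "file://D:\Backups\MyJob" `
    --dbpath="C:\Users\me\AppData\Local\Duplicati\ABCDEFGHIJ.sqlite"
```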
I think I found the culprit, and it had nothing to do with any of this. I noticed that backups hadn’t run for a week and a half, and I saw that the storage was full. That’s when I spotted the mismatch between sizes.
I’ve been investigating it, and I found the main culprit for the storage hoarding:
A few weeks ago I temporarily created a junction inside the folders being backed up. That junction pointed to the folder that stored the backup itself. I forgot to remove it, so every time a backup ran, it backed up the backup files themselves, quickly filling all the remaining space.
Eventually the backup failed, obviously, and of course it didn’t optimize the size of a backup that it couldn’t even finish.
Now I’ve deleted the junction and sent a “Purge” command targeting the junction folder; it’s running now. Hopefully, once it finishes, it will have deleted all the extra data.
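For reference, a purge like that can also be issued from the command line; a sketch with placeholder values (the destination URL and the purged path are assumptions, not the actual ones from this job):

```powershell
# Remove every backed-up file under the old junction path from all
# backup versions. Both paths are hypothetical; quote them because
# of the spaces.
& "C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" purge `
    "file://D:\Backups\MyJob" `
    "C:\Data\OldJunction\*"
```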
I’ll update this thread if it indeed turns out that this was it (which I think it clearly is).
That was an important omission. You might be comparing a stale Duplicati size to the current folder size.
You could certainly sort the folder by date to see how much of its bulk is from after the 8/27 last success.
Don’t expect numbers to line up completely, but you might solve much of that major mismatch.
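One way to do that sorting on Windows is a short PowerShell snippet; the folder path is a placeholder, and the cutoff year is assumed for illustration:

```powershell
# Sum the size of backend files newer than the last successful backup.
$cutoff = Get-Date "2023-08-27"   # last-success date; year assumed
Get-ChildItem "D:\Backups\MyJob" |
    Where-Object { $_.LastWriteTime -gt $cutoff } |
    Sort-Object LastWriteTime |
    Measure-Object -Property Length -Sum
```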
I think the home page statistics are only updated by successful backups.
For that matter, I think the job log is the same way; otherwise I’d suggest viewing the log files of a backup job and checking the Complete log’s BackendStatistics:
BackupListCount should match backup versions, and expect 882 files or so, matching Windows.
Roughly half the files are dblock files, so I guess you set up an oversized Remote volume size.
After you get things back into working order, and finish a backup, you can check the sizes again.
I just wanted to post here that the cause was indeed the failed backup. It took a while to fix: I had to manually delete the backup files created since the first failed date, then run a database Recreate (delete and repair), which took a few days to complete. But it eventually rebuilt the database, started making backups successfully again, and now reports the size properly.
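For anyone following along, a Recreate is effectively “delete the local database, then repair”, which from the command line looks roughly like this (the database and destination paths are placeholders, not this job’s real ones):

```powershell
# Deleting the job database first forces the subsequent repair to
# rebuild it from the dlist/dindex files on the backend.
Remove-Item "C:\Users\me\AppData\Local\Duplicati\ABCDEFGHIJ.sqlite"
& "C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" repair `
    "file://D:\Backups\MyJob" `
    --dbpath="C:\Users\me\AppData\Local\Duplicati\ABCDEFGHIJ.sqlite"
```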
Thank you for your help @gpatel-fr and @ts678.
If by that you mean doing a manual delete directly on the backend (not through Duplicati), that is a really dangerous operation if you use auto-compact (and that is the default)…
I had them backed up in the cloud for 30 days in case I needed to re-download them. It’s just that with those files present, any attempt at rebuilding the database or running any command eventually got stuck. Luckily it worked; I’ll keep your suggestion in mind for future situations.
It has now worked for a few days, but a new problem has arisen (I think it’s unrelated, since it involves access to the database).
I already opened a different thread for that. I’m sorry for causing so much trouble!