Storage filling up


I already checked several threads regarding this issue but it seems like none matches mine.

The Duplicati instance on my home server is slowly filling up my drive. I run Duplicati in Docker.
The Duplicati config directory uses 141 GB with just two backup plans, and some of the files even date from 2019 and 2020.

The version is
I even added auto-cleanup to each plan, without any success.

This is the folder listing:

-rw-------  1 root root 1.5G Oct 13 07:00  UCXUKVJSMS.sqlite
-rw-------  1 root root 5.6M Oct 13 07:00  UCXUKVJSMS.sqlite-journal
-rw-------  1 root root 604K Oct 13 06:42  Duplicati-server.sqlite
-rw-------  1 root root 2.5G Oct 13 03:57  UCXUKVJSMS.backup-56
-rw-------  1 root root  38G Oct 13 03:57  88787068678977808683.backup
-rw-------  1 root root 5.6G Oct 13 03:57  88787068678977808683.sqlite
-rw-------  1 root root 2.4G Oct 10 01:00  UCXUKVJSMS.backup-55
-rw-------  1 root root 2.4G Oct  8 01:00  UCXUKVJSMS.backup-54
-rw-------  1 root root 2.5G Oct  7 01:00  UCXUKVJSMS.backup-53
-rw-------  1 root root 2.5G Oct  3 01:00  UCXUKVJSMS.backup-52
-rw-------  1 root root 2.5G Sep 25 01:00  UCXUKVJSMS.backup-51
-rw-------  1 root root 1.8G Sep 18 01:00  UCXUKVJSMS.backup-50
-rw-------  1 root root 1.8G Sep 17 01:00  UCXUKVJSMS.backup-49
-rw-------  1 root root 1.8G Sep 16 01:00  UCXUKVJSMS.backup-48
-rw-------  1 root root 1.9G Sep 15 01:00  UCXUKVJSMS.backup-47
-rw-------  1 root root 2.4G Sep  6 03:00  OXPKYFAMWC.backup-9
-rw-------  1 root root 2.4G Aug 30 03:00  OXPKYFAMWC.backup-8
-rw-------  1 root root 1.6G Aug 19 01:00  UCXUKVJSMS.backup-46
-rw-------  1 root root 1.6G Aug 18 01:00  UCXUKVJSMS.backup-45
-rw-------  1 root root 1.5G Aug 16 01:00  UCXUKVJSMS.backup-44
-rw-------  1 root root 1.5G Aug 15 01:00  UCXUKVJSMS.backup-43
-rw-------  1 root root 1.5G Aug 14 01:00  UCXUKVJSMS.backup-42
-rw-------  1 root root 1.5G Aug 13 01:00  UCXUKVJSMS.backup-41
-rw-------  1 root root 1.5G Aug 11 01:00  UCXUKVJSMS.backup-40
-rw-------  1 root root 1.5G Aug  9 01:00  UCXUKVJSMS.backup-39
-rw-------  1 root root 823M Aug  7 01:00  UCXUKVJSMS.backup-38
-rw-------  1 root root 823M Aug  6 01:00  UCXUKVJSMS.backup-37
-rw-------  1 root root 823M Aug  5 01:00  UCXUKVJSMS.backup-36
-rw-------  1 root root 826M Aug  4 01:00  UCXUKVJSMS.backup-35
-rw-------  1 root root 1.5G Aug  2 03:00  OXPKYFAMWC.backup-7
-rw-------  1 root root 1.5G Jul 26 03:00  OXPKYFAMWC.backup-6
-rw-------  1 root root 1.4G Jul 12 03:00  OXPKYFAMWC.backup-5
-rw-------  1 root root 1.5G Jul  5 03:00  OXPKYFAMWC.backup-4
drwxr-xr-x  2 root root 4.0K Jul  3 04:00  custom-cont-init.d
drwxr-xr-x  2 root root 4.0K Jul  3 04:00  custom-services.d
-rw-------  1 root root 1.4G Jun 21 04:00  XMQGDGVQKA.backup-5
-rw-------  1 root root 1.3G Jun  7 03:00  OXPKYFAMWC.backup-3
-rw-------  1 root root 823M May 27 01:00  UCXUKVJSMS.backup-34
-rw-------  1 root root 823M May 26 01:00  UCXUKVJSMS.backup-33
-rw-------  1 root root 823M May 25 01:00  UCXUKVJSMS.backup-32
-rw-------  1 root root 1.2G May 24 03:00  OXPKYFAMWC.backup-2
-rw-------  1 root root 823M May 24 01:00  UCXUKVJSMS.backup-31
-rw-------  1 root root 849M May 23 01:00  UCXUKVJSMS.backup-30
-rw-------  1 root root 1.2G May 10 02:00  BTLIXKKHGG.backup-2
-rw-------  1 root root 830M May  9 01:00  UCXUKVJSMS.backup-29
-rw-------  1 root root 830M May  8 01:00  UCXUKVJSMS.backup-28
-rw-------  1 root root 830M May  7 01:00  UCXUKVJSMS.backup-27
-rw-------  1 root root 821M May  6 01:00  UCXUKVJSMS.backup-26
-rw-------  1 root root 648M May  4 05:38 'backup BTLIXKKHGG 20210504073805.sqlite'
drwxr-xr-x  4 root root 4.0K May  4 04:00  .config
-rw-------  1 root root 318M May  3 01:51 'backup UCXUKVJSMS 20210504070213.sqlite'
-rw-------  1 root root 318M May  3 01:00  UCXUKVJSMS.backup-25
-rw-------  1 root root  25G May  3 00:23 'backup 88787068678977808683 20210504060044.sqlite'
-rw-------  1 root root 321M May  2 01:00  UCXUKVJSMS.backup-24
-rw-------  1 root root 317M Apr 30 01:00  UCXUKVJSMS.backup-23
-rw-------  1 root root 317M Apr 29 01:00  UCXUKVJSMS.backup-22
-rw-------  1 root root 317M Apr 28 01:00  UCXUKVJSMS.backup-21
-rw-------  1 root root 317M Apr 27 01:00  UCXUKVJSMS.backup-20
-rw-------  1 root root 604M Apr 26 11:04 'backup XMQGDGVQKA 20210505020251.sqlite'
-rw-------  1 root root 558M Apr 26 04:47 'backup OXPKYFAMWC 20210504020234.sqlite'
-rw-------  1 root root 558M Apr 26 03:00  OXPKYFAMWC.backup-1
-rw-------  1 root root  98M Apr 26 02:00  BTLIXKKHGG.backup-1
-rw-------  1 root root 317M Apr 26 01:00  UCXUKVJSMS.backup-19
-rw-------  1 root root 319M Apr 25 01:00  UCXUKVJSMS.backup-18
-rw-------  1 root root 315M Apr 23 01:00  UCXUKVJSMS.backup-17
-rw-------  1 root root 315M Apr 22 01:00  UCXUKVJSMS.backup-16
-rw-------  1 root root 315M Apr 21 01:00  UCXUKVJSMS.backup-15
-rw-------  1 root root 317M Apr 20 01:00  UCXUKVJSMS.backup-14
-rw-------  1 root root 540M Apr 19 03:00  OXPKYFAMWC.backup
-rw-------  1 root root  99M Apr 19 02:00  BTLIXKKHGG.backup
-rw-------  1 root root 315M Apr 18 01:00  UCXUKVJSMS.backup-13
-rw-------  1 root root 315M Apr 17 01:00  UCXUKVJSMS.backup-12
-rw-------  1 root root 315M Apr 16 01:00  UCXUKVJSMS.backup-11
-rw-------  1 root root 315M Apr 15 01:00  UCXUKVJSMS.backup-10
-rw-------  1 root root 315M Apr 14 01:00  UCXUKVJSMS.backup-9
-rw-------  1 root root 317M Apr 13  2021  UCXUKVJSMS.backup-8
-rw-------  1 root root 314M Apr 11  2021  UCXUKVJSMS.backup-7
-rw-------  1 root root 314M Apr 10  2021  UCXUKVJSMS.backup-6
-rw-------  1 root root 314M Apr  9  2021  UCXUKVJSMS.backup-5
-rw-------  1 root root 314M Apr  8  2021  UCXUKVJSMS.backup-4
-rw-------  1 root root 314M Apr  7  2021  UCXUKVJSMS.backup-3
-rw-------  1 root root 314M Apr  6  2021  UCXUKVJSMS.backup-2
-rw-------  1 root root 314M Apr  5  2021  UCXUKVJSMS.backup-1
-rw-------  1 root root 300M Apr  4  2021  UCXUKVJSMS.backup
-rw-------  1 root root 1.7M Mar 30  2021  XMQGDGVQKA.backup-4
-rw-------  1 root root 3.2M Mar 30  2021  XMQGDGVQKA.backup-3
-rw-------  1 root root 1.7M Mar 30  2021  XMQGDGVQKA.backup-2
-rw-------  1 root root 3.2M Mar 30  2021  XMQGDGVQKA.backup-1
-rw-------  1 root root 132K Mar 30  2021  XMQGDGVQKA.backup
-rw-------  1 root root 562M Jan 18  2020 'backup 88787068678977808683 20200120120000.sqlite'
-rw-------  1 root root 120K Jan  5  2020 'Sicherung 20200105115926.sqlite'
-rw-------  1 root root 1.4G Jan  5  2020  66767281859082708788.sqlite
-rw-------  1 root root 2.3M Jan  5  2020  84847975877189658682.sqlite
-rw-------  1 root root 120K Jan  5  2020 'Sicherung 20200105113243.sqlite'
-rw-------  1 root root 120K Sep 29  2019 'Sicherung 20190929084141.sqlite'
-rw-------  1 root root 113M Sep 29  2019  86866589848967869066.sqlite
-rw-------  1 root root 120K Sep 29  2019 'Sicherung 20190929082237.sqlite'
-rw-------  1 root root 124K Sep 29  2019  70897066838571827572.sqlite
-rw-------  1 root root 124K Sep 29  2019  70897066838571827572.backup-1
-rw-------  1 root root 120K Sep 29  2019 'Sicherung 20190929081856.sqlite'
-rw-------  1 root root 120K Sep 29  2019 'Sicherung 20190929081822.sqlite'
-rw-------  1 root root 120K Sep 29  2019 'Sicherung 20190929081754.sqlite'
-rw-------  1 root root 128K Aug 27  2019  70897066838571827572.backup
-rw-------  1 root root 120K Aug 27  2019 'Sicherung 20190827022547.sqlite'
-rw-------  1 root root 120K Aug 27  2019 'Sicherung 20190827022519.sqlite'
-rw-------  1 root root 3.2K Aug 27  2019  windows-backup
drwxr-xr-x  3 root root 4.0K Aug 27  2019  .mono
drwxr-xr-x  2 root root 4.0K Aug 27  2019  control_dir_v2

Welcome to the forum @Anon

Files ending in <date>.sqlite are probably automatic backups made when Duplicati upgrades the DB format.
Old ones go stale very quickly (assuming the backup they belong to is still running), and they can be deleted.
You can probably also use ls -lurt on the folder to check last-read times, to confirm nothing is in current use:

How to limit or delete automatic local database backups
Limit sqlite database backups - feedback please
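If those stale date-stamped backups are confirmed unused, a find one-liner can clear them out. This is only a sketch: /config is an assumed container path (adjust it to your Docker volume mapping), and the one-year age cutoff is arbitrary. Run the dry-run form and review the list before deleting anything.

```shell
CONFIG_DIR=${CONFIG_DIR:-/config}   # assumed Docker volume path; adjust
if [ -d "$CONFIG_DIR" ]; then
  # Dry run: list automatic DB backups ("backup <name> <date>.sqlite")
  # that have not been modified for over a year.
  find "$CONFIG_DIR" -maxdepth 1 -name 'backup *.sqlite' -mtime +365 -print
  # After reviewing the list, append -delete to actually remove them.
fi
```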

Some such files will be harder to match to a current database because the original DB name isn’t in them:

Differens in database files, what are they?

Did it make things worse? I wonder if that’s what’s creating the files ending in .backup or .backup-<number>.

Do you run backups in the GUI? If so, do you get clean runs without popup warnings, errors, or complaints?
You can watch About → Show log → Live → Information at the start of a backup; maybe you’ll see the lines mentioned above.

How many backup jobs do you currently run, or care about? You can look up their Local database path.
Be sure to keep those. I see only two that seem active. Was the OXPKYFAMWC job deleted very recently?
88787068678977808683.sqlite would be one created from an older release, maybe or
I’m not sure how it managed to make a .backup version with the same timestamp and far greater size…
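To see which local databases the active jobs actually point at (everything else in the folder is then a cleanup candidate), the server database can be queried directly. A sketch, assuming the server DB keeps job definitions in a Backup table with Name and DBPath columns (check your schema first) and that /config is the container path:

```shell
DB=/config/Duplicati-server.sqlite   # assumed container path
if [ -f "$DB" ]; then
  # List each job's name and the local database it uses.
  # Assumes a "Backup" table with "Name" and "DBPath" columns.
  sqlite3 "$DB" 'SELECT Name, DBPath FROM Backup;'
fi
```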


Thank you, and thanks for the reply. I haven’t seen auto-cleanup make things much worse in the last few days of testing it.

Sadly, my backups often have errors due to the cloud upload limit (HTTP 403).

To be honest, yesterday I exported the two backup plans, deleted the whole Duplicati folder, restarted it, and imported the backup plans again. I’ll check soon how the database recreation went, whether a backup is possible, and how large the folder is now. :slight_smile:

Well, after several tries of repair and delete/restore, I can’t get the backup working again.

Duplicati.Library.Interface.UserInformationException: The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.

Please check About → Show log → Stored to see if you can find errors from DB recreation attempts.

With just enough retries it now says

Duplicati.Library.Interface.UserInformationException: Recreated database has missing blocks and 6 broken filelists. Consider using “list-broken-files” and “purge-broken-files” to purge broken data from the remote store and the database.

I started the purge about 12 hours ago and the status in the header is still

Starting backup …

Retries of what? Do you mean you get enough file transfer errors to need a higher number-of-retries?
Do you mean you ran Recreate multiple times manually, and this is the first one that said anything?

Did you also run list-broken-files to see what purge-broken-files would do? Can you describe it generally?
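For reference, a sketch of how the two commands look from the command line. The target URL and dbpath are placeholders, not values you can copy verbatim; take the real target URL and options from your exported job configuration, and the dbpath from the job’s Local database path.

```shell
# Preview what would be purged (read-only):
duplicati-cli list-broken-files "<target-url>" --dbpath=/config/UCXUKVJSMS.sqlite

# Then actually remove the broken data from the remote store and the DB:
duplicati-cli purge-broken-files "<target-url>" --dbpath=/config/UCXUKVJSMS.sqlite
```

Running the list form first means the purge holds no surprises.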

The wrong “Starting” message for what you’re doing seems to be a standard bug. My guess on that is here too:

PS: It’s a bit confusing that when you start a “repair” it says “Starting backup”…

While there are tempting status outputs from purge, I think this is the purge command, not purge-broken-files:

One can compare the sources and look for the UpdatePhase lines (this one doesn’t have any such lines).

Yours does do some Logging.Log lines. Does About → Show log → Live → Information show activity?

How big and active are these backups, and where are they going? At Google Drive, 403 comes at the 750 GB daily upload limit (however, it also returns 403 at other random times, for no reason anybody has been able to figure out).

Retries of rebuilding the database; it mostly just showed the error mentioned in the third post.

I didn’t list the broken files; I only started the purge.

I don’t mind the starting message; I just wonder how it could take 12 hours (and counting) to delete 6 files.

I have no new logs in “Live” other than the one suggesting to run “purge-broken-files”.
I’m tempted to cancel the current run and retry…

Yes, 403 from Google Cloud with the 750 GB limit. It’s just far too little considering my use case, with about 140 TB of data by now.
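For scale, a back-of-envelope check of what that cap means for the initial upload (decimal units, ignoring retries and compression):

```shell
# 140 TB of source data against a 750 GB/day upload cap.
awk 'BEGIN {
  days = 140e12 / 750e9
  printf "%.0f days (~%.1f months)\n", days, days / 30
}'
# -> 187 days (~6.2 months)
```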

Those are typically shown via a popup and the regular job logs. The other kind is in the server log. You have to check.
Combine logs in GUI #1152 is an enhancement request to resolve the weird where-are-the-logs chasing.
If you’re looking and actually getting varying results and messages (all places) from Recreate, that’s odd.

That’s a lot. I hope you increased your blocksize from the default 100 KB, or some things may get quite slow.
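To put numbers on that (140 TB is the figure from the previous post; the 5 MB alternative is purely illustrative, not a recommendation):

```shell
# Blocks Duplicati must track = source size / blocksize.
awk 'BEGIN {
  tb = 140 * 2^40                                          # 140 TB in bytes
  printf "100 KB blocks: %.1f billion\n", tb / (100 * 1024) / 1e9
  printf "  5 MB blocks: %.1f million\n", tb / (5 * 1024 * 1024) / 1e6
}'
```

Every one of those blocks is a row in the local SQLite database, which is why queries crawl at the default blocksize.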

You can try the Profiling level for more info. Sometimes SQL queries are slow for big backups with small blocks.

Starting a live log earlier might catch something. If you want a real file, use log-file=<path> with some log-file-log-level. I suppose you could watch the live log at Information and let Profiling write what may be a large file; Verbose is in between.
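Concretely, the two advanced options would look like this (the option names are real Duplicati advanced options; the path is an assumed location inside the container):

```shell
--log-file=/config/duplicati-job.log
--log-file-log-level=Profiling
```

They can be set per job under Advanced options, or globally in Settings.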

I cancelled the purge run yesterday and started the list-broken-files:

  Oct 1, 2021 15:00: Backend event: List - Started: ()

No more logs since then.
Although I had router and modem updates, which caused an internet outage. Is it still running, or might that have stopped the process?

I assume you mean Google Drive. Google Cloud Storage is a different thing. Wikipedia on Google Cloud.

Cloud Storage description at Google. Now is not the time, but it might be a less limited option to consider.
If you’re currently enjoying an education account (the daily limit aside), note that a change is coming…

More options for learning with Google Workspace for Education

What you are on also matters because you’re now asking about low-level networking to your destination.

Outages are typically detected as failures and retried; I don’t think it’s proven that they’re always detected.
It’s difficult to look inside a process. Is your system idle enough in general that Duplicati is the main user?
If so, you might be able to see whether Task Manager’s Performance tab shows network activity. If you want to look at activity just for Duplicati, you can try the Details tab, pick the child Duplicati process, and use the column selections to see what you can find. I’m not sure which category (if any) shows networking…
Sysinternals Process Explorer has more capabilities, and even has a “Disk and Network” tab per process.
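Since Duplicati runs in Docker here, per-container network counters are an easier Linux-side check than per-process ones (the container name duplicati is an assumption; substitute your own):

```shell
# One-shot snapshot of the container's cumulative network I/O.
docker stats --no-stream --format '{{.Name}}: {{.NetIO}}' duplicati
```

Run it twice a minute apart; if the NetIO numbers don’t move, nothing is being transferred.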

Yeah sorry, I meant Google Drive. And it’s my paid Business Account.

Regarding the system resources:

Also, there is nothing from Duplicati in NetHogs over a few minutes of monitoring.

(This is my headless home server, set up well enough to monitor every service individually, so these stats reflect only Duplicati.)

You might as well give up on the current list attempt with a process kill (not generally advised, but it’s stuck).
I kind of wonder how many files you have at the destination, but list used to run unless you deconfigured it.

Those are the stats in Duplicati for the affected backup:

And the dblock-size is 50 MB.

(Also worth noting: I probably have the same issue on another server.)

I also restarted the container, and it picked up the job again; it shows some more activity in monitoring now, too. I’ll probably need to wait at least another day, though.

EDIT: it probably picked up another job…

EDIT 2: the list worked; 6 broken files, as mentioned before. Now the purge is running. Do I need to repair the database again afterwards?

EDIT 3: the purge worked, and the backup started without my running anything else first.

That’s good. A backup does some checks on the destination and DB before it gets going, so I guess those passed.

FYI, it counted almost 2 million files to back up.


The filelist mentioned is not a source file; it’s a list of files, basically a backup version (of which you have lots).
If you had run list-broken-files, you would have seen the files and their backup versions. Did purge give any detail?
Regardless, there’s some damage when there are missing blocks, but hopefully the broken files are gone, courtesy of their deletion. If a file is still around, the next backup will see it as new and capture its current state.

The run didn’t finish. It’s currently stuck at:

Found 6 files that are missing from the remote storage, please run repair

I ran the repair, and it says the same thing a few minutes later. I probably need to run the delete-and-restore option.

Found 6 files that are missing from the remote storage, please run repair

and it probably named the missing files in some log, actual or potential. If the job log didn’t come out (check), try About → Show log → Stored; however, the best chance of seeing the names is About → Show log → Live → Warning during the attempt. The lack of progress from Repair suggests they might be dblock files (source blocks).

What’s that exactly? Restore what? If you mean recreate the database, you might consider this instead:

Recovering by purging files

Yes, recreate the database. I already started it at that time.

I’m currently hitting 403 again due to many uploads, so I’ll see later whether it works again.