"Failed to connect: The database file is locked database is locked" during "Deleting unwanted files …" never terminates

I’ve got Duplicati running a single backup job that has never completed, although it did manage to copy all the files to the destination during the initial backup. Reading the job’s logs at the Reporting: Show logs link hangs for a while and then shows a dialog with “Failed to connect: The database file is locked database is locked”.

As it stands I can successfully:

  • Start the container
  • Browse the backups
  • Verify the database
  • Start and get all the way through an incremental backup until the “Deleting unwanted files” phase.

Once it begins “Deleting unwanted files” it just takes up 25% of my system CPU, perpetually spinning, never able to resolve the database lock. Attempting to read About > Logs, or viewing the container logs from Docker, shows the following error:

Mono.Data.Sqlite.SqliteException (0x80004005): The database file is locked
database is locked
  at Mono.Data.Sqlite.SQLite3.Prepare (Mono.Data.Sqlite.SqliteConnection cnn, System.String strSql, Mono.Data.Sqlite.SqliteStatement previous, System.UInt32 timeoutMS, System.String& strRemain) [0x001ba] in <0e0c808ba41a4844814b82eb7ba357a3>:0 
  at Mono.Data.Sqlite.SqliteCommand.BuildNextCommand () [0x000d3] in <0e0c808ba41a4844814b82eb7ba357a3>:0 
  at Mono.Data.Sqlite.SqliteCommand.GetStatement (System.Int32 index) [0x00008] in <0e0c808ba41a4844814b82eb7ba357a3>:0 
  at (wrapper remoting-invoke-with-check) Mono.Data.Sqlite.SqliteCommand.GetStatement(int)
  at Mono.Data.Sqlite.SqliteDataReader.NextResult () [0x000b3] in <0e0c808ba41a4844814b82eb7ba357a3>:0 
  at Mono.Data.Sqlite.SqliteDataReader..ctor (Mono.Data.Sqlite.SqliteCommand cmd, System.Data.CommandBehavior behave) [0x0004e] in <0e0c808ba41a4844814b82eb7ba357a3>:0 
  at (wrapper remoting-invoke-with-check) Mono.Data.Sqlite.SqliteDataReader..ctor(Mono.Data.Sqlite.SqliteCommand,System.Data.CommandBehavior)
  at Mono.Data.Sqlite.SqliteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x00006] in <0e0c808ba41a4844814b82eb7ba357a3>:0 
  at Mono.Data.Sqlite.SqliteCommand.ExecuteNonQuery () [0x00000] in <0e0c808ba41a4844814b82eb7ba357a3>:0 
  at Mono.Data.Sqlite.SqliteConnection.Open () [0x00543] in <0e0c808ba41a4844814b82eb7ba357a3>:0 
  at Duplicati.Library.SQLiteHelper.SQLiteLoader.OpenSQLiteFile (System.Data.IDbConnection con, System.String path) [0x00021] in <0f5cd23b75654343bab04c48758cac91>:0 
  at Duplicati.Library.SQLiteHelper.SQLiteLoader.LoadConnection (System.String targetpath) [0x00048] in <0f5cd23b75654343bab04c48758cac91>:0 
  at Duplicati.Server.WebServer.RESTMethods.Backup.FetchRemoteLogData (Duplicati.Server.Serialization.Interface.IBackup backup, Duplicati.Server.WebServer.RESTMethods.RequestInfo info) [0x00006] in <30a34d71126b48248d040dda634ddad9>:0 
  at Duplicati.Server.WebServer.RESTMethods.Backup.GET (System.String key, Duplicati.Server.WebServer.RESTMethods.RequestInfo info) [0x001ed] in <30a34d71126b48248d040dda634ddad9>:0 
  at Duplicati.Server.WebServer.RESTHandler.DoProcess (Duplicati.Server.WebServer.RESTMethods.RequestInfo info, System.String method, System.String module, System.String key) [0x00146] in <30a34d71126b48248d040dda634ddad9>:0 

And:

System.ObjectDisposedException: Can not write to a closed TextWriter.
  at System.IO.StreamWriter.Flush (System.Boolean flushStream, System.Boolean flushEncoder) [0x00008] in <d636f104d58046fd9b195699bcb1a744>:0 
  at System.IO.StreamWriter.Flush () [0x00006] in <d636f104d58046fd9b195699bcb1a744>:0 
  at Duplicati.Server.WebServer.RESTHandler.DoProcess (Duplicati.Server.WebServer.RESTMethods.RequestInfo info, System.String method, System.String module, System.String key) [0x003bc] in <30a34d71126b48248d040dda634ddad9>:0 

I’m running version 2.0.7.1_beta_2023-05-25. I originally had it backing up everything on the system where the container is running so it would have eventually attempted to back up the in-use database file. I have since:

  1. Excluded the entire container path for Duplicati
  2. Moved the database file to a different location (and set the --dbpath option).
  3. Moved the file back after Duplicati had trouble reading the remote logs even before the backup was running.

If there’s more information I can provide, let me know. This is 100% reproducible with my current setup, and I don’t expect that to change soon, given that I’ve read every search result about the quoted terms in the title and nothing really seemed to help.

Here’s the system info:

System properties

  • APIVersion : 1
  • PasswordPlaceholder : **********
  • ServerVersion : 2.0.7.1
  • ServerVersionName : - 2.0.7.1_beta_2023-05-25
  • ServerVersionType : Beta
  • StartedBy : Server
  • BaseVersionName : 2.0.7.1_beta_2023-05-25
  • DefaultUpdateChannel : Beta
  • DefaultUsageReportLevel : Information
  • ServerTime : 2023-08-25T07:17:33.950878-07:00
  • OSType : Linux
  • DirectorySeparator : /
  • PathSeparator : :
  • CaseSensitiveFilesystem : true
  • MonoVersion : 6.12.0.200
  • MachineName : 792b30785b1b
  • UserName : abc
  • NewLine :
  • CLRVersion : 4.0.30319.42000
  • CLROSInfo : {"Platform":"Unix","ServicePack":"","Version":"5.19.17.0","VersionString":"Unix 5.19.17.0"}
  • ServerModules :
  • UsingAlternateUpdateURLs : false
  • LogLevels : ["ExplicitOnly","Profiling","Verbose","Retry","Information","DryRun","Warning","Error"]
  • SuppressDonationMessages : false
  • SpecialFolders : [{"ID":"%MY_DOCUMENTS%","Path":"/config"},{"ID":"%HOME%","Path":"/config"}]
  • BrowserLocale : {"Code":"en-US","EnglishName":"English (United States)","DisplayName":"English (United States)"}
  • SupportedLocales : [{"Code":"bn","EnglishName":"Bangla","DisplayName":"বাংলা"},{"Code":"ca","EnglishName":"Catalan","DisplayName":"català"},{"Code":"cs","EnglishName":"Czech","DisplayName":"čeština"},{"Code":"da","EnglishName":"Danish","DisplayName":"dansk"},{"Code":"de","EnglishName":"German","DisplayName":"Deutsch"},{"Code":"en","EnglishName":"English","DisplayName":"English"},{"Code":"en-GB","EnglishName":"English (United Kingdom)","DisplayName":"English (United Kingdom)"},{"Code":"es","EnglishName":"Spanish","DisplayName":"español"},{"Code":"fi","EnglishName":"Finnish","DisplayName":"suomi"},{"Code":"fr","EnglishName":"French","DisplayName":"français"},{"Code":"fr-CA","EnglishName":"French (Canada)","DisplayName":"français (Canada)"},{"Code":"hu","EnglishName":"Hungarian","DisplayName":"magyar"},{"Code":"it","EnglishName":"Italian","DisplayName":"italiano"},{"Code":"ja-JP","EnglishName":"Japanese (Japan)","DisplayName":"日本語 (日本)"},{"Code":"ko","EnglishName":"Korean","DisplayName":"한국어"},{"Code":"lt","EnglishName":"Lithuanian","DisplayName":"lietuvių"},{"Code":"lv","EnglishName":"Latvian","DisplayName":"latviešu"},{"Code":"nl-NL","EnglishName":"Dutch (Netherlands)","DisplayName":"Nederlands (Nederland)"},{"Code":"pl","EnglishName":"Polish","DisplayName":"polski"},{"Code":"pt","EnglishName":"Portuguese","DisplayName":"português"},{"Code":"pt-BR","EnglishName":"Portuguese (Brazil)","DisplayName":"português (Brasil)"},{"Code":"ro","EnglishName":"Romanian","DisplayName":"română"},{"Code":"ru","EnglishName":"Russian","DisplayName":"русский"},{"Code":"sk","EnglishName":"Slovak","DisplayName":"slovenčina"},{"Code":"sk-SK","EnglishName":"Slovak (Slovakia)","DisplayName":"slovenčina (Slovensko)"},{"Code":"sv-SE","EnglishName":"Swedish (Sweden)","DisplayName":"svenska (Sverige)"},{"Code":"th","EnglishName":"Thai","DisplayName":"ไทย"},{"Code":"zh-CN","EnglishName":"Chinese (Simplified)","DisplayName":"中文 (中国)"},{"Code":"zh-HK","EnglishName":"Chinese (Traditional, Hong Kong SAR China)","DisplayName":"中文 (中国香港特别行政区)"},{"Code":"zh-TW","EnglishName":"Chinese (Traditional)","DisplayName":"中文 (台湾)"}]
  • BrowserLocaleSupported : true
  • backendgroups : {"std":{"ftp":null,"ssh":null,"webdav":null,"openstack":"OpenStack Object Storage / Swift","s3":"S3 Compatible","aftp":"FTP (Alternative)"},"local":{"file":null},"prop":{"e2":null,"s3":null,"azure":null,"googledrive":null,"onedrive":null,"onedrivev2":null,"sharepoint":null,"msgroup":null,"cloudfiles":null,"gcs":null,"openstack":null,"hubic":null,"b2":null,"mega":null,"idrive":null,"box":null,"od4b":null,"mssp":null,"dropbox":null,"sia":null,"storj":null,"tardigrade":null,"jottacloud":null,"rclone":null,"cos":null}}
  • GroupTypes : ["Local storage","Standard protocols","Proprietary","Others"]
  • Backend modules: aftp azure b2 box cloudfiles dropbox ftp file googledrive gcs hubic jottacloud mega msgroup onedrivev2 sharepoint openstack rclone s3 ssh od4b mssp sia storj tahoe tardigrade cos webdav e2
  • Compression modules: zip 7z
  • Encryption modules: aes gpg

Server state properties

  • lastEventId : 374
  • lastDataUpdateId : 12
  • lastNotificationUpdateId : 0
  • estimatedPauseEnd : 0001-01-01T00:00:00
  • activeTask : {"Item1":8,"Item2":"1"}
  • programState : Running
  • lastErrorMessage :
  • connectionState : connected
  • xsfrerror : false
  • connectionAttemptTimer : 0
  • failedConnectionAttempts : 0
  • lastPgEvent : {"BackupID":"1","TaskID":8,"BackendAction":"Put","BackendPath":"ParallelUpload","BackendFileSize":1244938195,"BackendFileProgress":1244938195,"BackendSpeed":24431,"BackendIsBlocking":false,"CurrentFilename":null,"CurrentFilesize":0,"CurrentFileoffset":0,"CurrentFilecomplete":false,"Phase":"Backup_Delete","OverallProgress":0,"ProcessedFileCount":927666,"ProcessedFileSize":4244879640226,"TotalFileCount":927668,"TotalFileSize":4244879640520,"StillCounting":false}
  • updaterState : Waiting
  • updatedVersion :
  • updateReady : false
  • updateDownloadProgress : 0
  • proposedSchedule : [{"Item1":"1","Item2":"2023-08-28T08:00:00Z"}]
  • schedulerQueueIds :
  • pauseTimeRemain : 0

Welcome to the forum @danblakemore

There’s no such path, although the result sounds more like Viewing the log files of a backup job.
That would start at the job’s Show log. Logs are in the job database, but it’s busy during the backup.
The other reason not to look for the job’s result log is that it isn’t there until its backup has finished.

Viewing the Duplicati Server Logs appears to be miswritten, but it’s aiming at About → Show log.
From there you can click either Server, for failed operations that didn’t make a job log, or Live, for scrolling updates on what’s going on. For your situation, Verbose might be good; Profiling shows even more.
Information shows less, but if it’s actually Compacting files at the backend, you’ll see some files move.

The only mention of the database lock is when (it seems) you asked for logs from the locked database.
Was there any situation where you saw that complaint without trying to read logs you can’t get at?

I’m not sure this would have bothered Linux; I think Windows is stricter about in-use files. Regardless, exclude the DB.
The active DB is instantly out of date with the destination if you grab it while its own backup is active.
An out-of-date DB is useless or worse: it can damage the destination if it’s put back into use and a Repair is then run.
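
For reference, an exclude of the Duplicati configuration directory might look like this as an advanced option (a sketch; the path is illustrative and should match wherever the container maps its databases):

--exclude=/source/mnt/user/appdata/duplicati/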

You could also check whether this event is stuck or the Processed numbers are going up. That’s a big backup.
I’m hoping you set up a scaled-up blocksize; 5 MB might be nice. The default gets slow after about 100 GB.
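
Rough arithmetic, assuming about 4 TB of source data (which your progress event suggests):

4 TB / 100 KB (default blocksize)  ≈ 40 million blocks to track
4 TB / 5 MB (scaled-up blocksize)  ≈ 0.8 million blocks to track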

EDIT 1:

These statements sound contradictory. The first says the initial backup never finished. The second mentions an incremental backup, suggesting the initial backup finished and incrementals are now busy, possibly in a long compact.

EDIT 2:

This is confusing too. If it never finishes, how do you start over and hit it again? If the situation is that the initial backup actually completed, some incrementals completed, and then they stopped completing, that behavior would suggest wasted space built up until a compact finally ran, but ran slowly.

Long compacts usually happen when version deletions release blocks, and that’s based on retention.
The no-auto-compact option can stop automatic compacts; a manual compact can be run with the Compact now button.
Compact has its own section in the job log, so you can see what sorts (if any) ran before. Some runs don’t repackage partially filled files into full ones; an easier case is when a whole file is deleted. The log shows which.
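
As a sketch, the relevant advanced options look like this (values illustrative; threshold is the wasted-space percentage that triggers a compact):

--no-auto-compact=true
--threshold=25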

EDIT 3:

There’s no statement on I/O activity (or the lack of it). The database lock situation needs clarification, but may be unrelated. The spinning could be slow SQL if you’re on the default blocksize (are you?); a Profiling log might show it. Sometimes SQL causes no I/O because it has a memory cache; other times it may actually need I/O. SQLite is not threaded in a way that lets it eat your whole CPU, so if you have a quad core, 25% is about right.

The OS cache can also soak up I/O. I currently have a slow large backup with 14 TB of SQLite reads where almost nothing is actually hitting the hard drive, fortunately. That one is a USN issue, so it can’t be your situation…

Yes, I did mean About > Show log, sorry.

I’ve got a tab open with the verbose logs so we’ll see what we can see.

In the logs from the Docker container, but I think these are actually old. It makes sense that the logs being in the DB would make them hard to read while the DB is in use.

Done, thanks for the advice.

I did not. I guess that’s why my 3TB backup took so long.

I have never seen it terminate after “Run now” or a scheduled run. When I stop the Docker container in which Duplicati is running, it hangs, so eventually the container manager just pulls the plug after a timeout.

The “incremental” backup is me hitting “Run now” again when the state is “Last successful backup: Never” and noting that it skips all the existing remote files and only copies the new ones.

Kill container, restart container, rerun job. That said, I normally don’t just kill it for kicks. I have waited days for the backup to finish and it just spins with no logs in the live logs.

Currently the live logs are still printing “Including path as no filters matched: blah” and counting files. After it gets through reconciling with the backend again, it will be in the delete phase and we’ll see if I missed a vital clue.

Yes, unfortunately. I didn’t change any of the default settings.

Yep, 4 real cores, 4 HT. It spikes one core and one hyperthread “core” at a time and moves around (presumably heat, blah blah). This is unlike the counting phase, which is nicely spread across many cores.


Thanks for your detailed reply and for hitting all the points, it’s helpful in figuring out which details matter and which don’t.

So once this backup gets to the final stage again, I will see what’s in the logs. In the meantime, should I even bother fiddling with this if I need to redo the backup to change the blocksize? Will the new “initial” backup be faster with a larger blocksize?

Thanks again.

Edit 1:


This is the last set of log entries; it stopped over an hour ago. It’s just been “deleting” since then.

System performance:


It will stay like this until I kill the container, the process jumping between cores.

Soooooooo it’s doing a lot of disk (or something is):
(screenshot of disk I/O omitted; the bottom line is total, the top two are parity)
Fairly sure there’s nothing else using this much disk (or any). So I guess I wait…

Try a live log at Profiling. Verbose won’t show SQL, and I’m not certain Profiling will show SQL that has already happened; it’s also quite a flood when things move. A log file at Profiling level might be a better option; then read it with less.
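
A sketch of the advanced options for that, with an illustrative path inside the container:

--log-file=/config/profiling.log
--log-file-log-level=Profiling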

It looks like you use a retention policy which does progressive thinning as backups age. What looks a bit odd is that no delete requests are going out, but I’m not sure what sort of activity might happen beforehand.

Need that profiling log (live or file). Here’s an example log from around the same spot on a far smaller backup:

2023-05-25 18:57:53 -04 - [Information-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-BackupsToDelete]: Backups outside of all time frames and thus getting deleted: 
2023-05-25 18:57:53 -04 - [Information-Duplicati.Library.Main.Operation.DeleteHandler:RetentionPolicy-AllBackupsToDelete]: All backups to delete: 5/24/2023 6:21:47 PM, 5/24/2023 5:26:22 PM
2023-05-25 18:57:53 -04 - [Information-Duplicati.Library.Main.Operation.DeleteHandler-DeleteRemoteFileset]: Deleting 2 remote fileset(s) ...
2023-05-25 18:57:53 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "Fileset" WHERE "Timestamp" IN (1684963582,1684966907) 
2023-05-25 18:57:53 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "Fileset" WHERE "Timestamp" IN (1684963582,1684966907)  took 0:00:00:00.007
2023-05-25 18:57:53 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "FilesetEntry" WHERE "FilesetID" NOT IN (SELECT DISTINCT "ID" FROM "Fileset")
2023-05-25 18:58:03 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "FilesetEntry" WHERE "FilesetID" NOT IN (SELECT DISTINCT "ID" FROM "Fileset") took 0:00:00:09.282
2023-05-25 18:58:03 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "ChangeJournalData" WHERE "FilesetID" NOT IN (SELECT DISTINCT "ID" FROM "Fileset")
2023-05-25 18:58:03 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "ChangeJournalData" WHERE "FilesetID" NOT IN (SELECT DISTINCT "ID" FROM "Fileset") took 0:00:00:00.007
2023-05-25 18:58:03 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "FileLookup" WHERE "ID" NOT IN (SELECT DISTINCT "FileID" FROM "FilesetEntry") 
2023-05-25 18:58:03 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "FileLookup" WHERE "ID" NOT IN (SELECT DISTINCT "FileID" FROM "FilesetEntry")  took 0:00:00:00.501
2023-05-25 18:58:03 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "Metadataset" WHERE "ID" NOT IN (SELECT DISTINCT "MetadataID" FROM "FileLookup") 
2023-05-25 18:58:03 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "Metadataset" WHERE "ID" NOT IN (SELECT DISTINCT "MetadataID" FROM "FileLookup")  took 0:00:00:00.064
2023-05-25 18:58:03 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "Blockset" WHERE "ID" NOT IN (SELECT DISTINCT "BlocksetID" FROM "FileLookup" UNION SELECT DISTINCT "BlocksetID" FROM "Metadataset") 
2023-05-25 18:58:03 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "Blockset" WHERE "ID" NOT IN (SELECT DISTINCT "BlocksetID" FROM "FileLookup" UNION SELECT DISTINCT "BlocksetID" FROM "Metadataset")  took 0:00:00:00.118
2023-05-25 18:58:03 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "BlocksetEntry" WHERE "BlocksetID" NOT IN (SELECT DISTINCT "ID" FROM "Blockset") 
2023-05-25 18:58:06 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "BlocksetEntry" WHERE "BlocksetID" NOT IN (SELECT DISTINCT "ID" FROM "Blockset")  took 0:00:00:02.103
2023-05-25 18:58:06 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "BlocklistHash" WHERE "BlocksetID" NOT IN (SELECT DISTINCT "ID" FROM "Blockset") 
2023-05-25 18:58:06 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "BlocklistHash" WHERE "BlocksetID" NOT IN (SELECT DISTINCT "ID" FROM "Blockset")  took 0:00:00:00.080
2023-05-25 18:58:06 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: INSERT INTO "DeletedBlock" ("Hash", "Size", "VolumeID") SELECT "Hash", "Size", "VolumeID" FROM "Block" WHERE "ID" NOT IN (SELECT DISTINCT "BlockID" AS "BlockID" FROM "BlocksetEntry" UNION SELECT DISTINCT "ID" FROM "Block", "BlocklistHash" WHERE "Block"."Hash" = "BlocklistHash"."Hash") 
2023-05-25 18:58:06 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: INSERT INTO "DeletedBlock" ("Hash", "Size", "VolumeID") SELECT "Hash", "Size", "VolumeID" FROM "Block" WHERE "ID" NOT IN (SELECT DISTINCT "BlockID" AS "BlockID" FROM "BlocksetEntry" UNION SELECT DISTINCT "ID" FROM "Block", "BlocklistHash" WHERE "Block"."Hash" = "BlocklistHash"."Hash")  took 0:00:00:00.623
2023-05-25 18:58:06 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: DELETE FROM "Block" WHERE "ID" NOT IN (SELECT DISTINCT "BlockID" FROM "BlocksetEntry" UNION SELECT DISTINCT "ID" FROM "Block", "BlocklistHash" WHERE "Block"."Hash" = "BlocklistHash"."Hash") 
2023-05-25 18:58:26 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: DELETE FROM "Block" WHERE "ID" NOT IN (SELECT DISTINCT "BlockID" FROM "BlocksetEntry" UNION SELECT DISTINCT "ID" FROM "Block", "BlocklistHash" WHERE "Block"."Hash" = "BlocklistHash"."Hash")  took 0:00:00:19.471
2023-05-25 18:58:26 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: Starting - ExecuteNonQuery: UPDATE "RemoteVolume" SET "State" = "Deleting" WHERE "Type" = "Files" AND "State" IN ("Uploaded", "Verified", "Temporary") AND "ID" NOT IN (SELECT "VolumeID" FROM "Fileset") 
2023-05-25 18:58:26 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteNonQuery]: ExecuteNonQuery: UPDATE "RemoteVolume" SET "State" = "Deleting" WHERE "Type" = "Files" AND "State" IN ("Uploaded", "Verified", "Temporary") AND "ID" NOT IN (SELECT "VolumeID" FROM "Fileset")  took 0:00:00:00.206
2023-05-25 18:58:26 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: Starting - ExecuteReader: SELECT "Name", "Size" FROM "RemoteVolume" WHERE "Type" = "Files" AND "State" = "Deleting" 
2023-05-25 18:58:26 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ExtensionMethods-ExecuteReader]: ExecuteReader: SELECT "Name", "Size" FROM "RemoteVolume" WHERE "Type" = "Files" AND "State" = "Deleting"  took 0:00:00:00.000
2023-05-25 18:58:30 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.BackendManager-RemoteOperationDelete]: Starting - RemoteOperationDelete
2023-05-25 18:58:30 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Started: duplicati-20230524T212622Z.dlist.zip.aes (923.81 KB)
2023-05-25 18:58:31 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Delete - Completed: duplicati-20230524T212622Z.dlist.zip.aes (923.81 KB)

Yep. This is the default, I thought, though perhaps I changed it.

So the backup actually finished! :tada:

It had some errors in the profiling log, but I failed to copy them before re-running (I will when this next run finishes). I’m letting it go again and it has been doing the following since this morning (these are the last profiling log entries).

Aug 30, 2023 7:15 AM: Starting - ExecuteNonQuery: INSERT INTO "DeletedBlock" ("Hash", "Size", "VolumeID") SELECT "Hash", "Size", "VolumeID" FROM "Block" WHERE "ID" NOT IN (SELECT DISTINCT "BlockID" AS "BlockID" FROM "BlocksetEntry" UNION SELECT DISTINCT "ID" FROM "Block", "BlocklistHash" WHERE "Block"."Hash" = "BlocklistHash"."Hash")
Aug 30, 2023 7:15 AM: ExecuteNonQuery: DELETE FROM "BlocklistHash" WHERE "BlocksetID" NOT IN (SELECT DISTINCT "ID" FROM "Blockset") took 0:00:00:02.299
Aug 30, 2023 7:15 AM: Starting - ExecuteNonQuery: DELETE FROM "BlocklistHash" WHERE "BlocksetID" NOT IN (SELECT DISTINCT "ID" FROM "Blockset")
Aug 30, 2023 7:15 AM: ExecuteNonQuery: DELETE FROM "BlocksetEntry" WHERE "BlocksetID" NOT IN (SELECT DISTINCT "ID" FROM "Blockset") took 0:00:02:21.692
Aug 30, 2023 7:13 AM: Starting - ExecuteNonQuery: DELETE FROM "BlocksetEntry" WHERE "BlocksetID" NOT IN (SELECT DISTINCT "ID" FROM "Blockset")
Aug 30, 2023 7:13 AM: ExecuteNonQuery: DELETE FROM "Blockset" WHERE "ID" NOT IN (SELECT DISTINCT "BlocksetID" FROM "FileLookup" UNION SELECT DISTINCT "BlocksetID" FROM "Metadataset") took 0:00:00:03.313
Aug 30, 2023 7:13 AM: Starting - ExecuteNonQuery: DELETE FROM "Blockset" WHERE "ID" NOT IN (SELECT DISTINCT "BlocksetID" FROM "FileLookup" UNION SELECT DISTINCT "BlocksetID" FROM "Metadataset")
Aug 30, 2023 7:13 AM: ExecuteNonQuery: DELETE FROM "Metadataset" WHERE "ID" NOT IN (SELECT DISTINCT "MetadataID" FROM "FileLookup") took 0:00:00:01.583
Aug 30, 2023 7:13 AM: Starting - ExecuteNonQuery: DELETE FROM "Metadataset" WHERE "ID" NOT IN (SELECT DISTINCT "MetadataID" FROM "FileLookup")
Aug 30, 2023 7:13 AM: ExecuteNonQuery: DELETE FROM "FileLookup" WHERE "ID" NOT IN (SELECT DISTINCT "FileID" FROM "FilesetEntry") took 0:00:00:03.720
Aug 30, 2023 7:13 AM: Starting - ExecuteNonQuery: DELETE FROM "FileLookup" WHERE "ID" NOT IN (SELECT DISTINCT "FileID" FROM "FilesetEntry")
Aug 30, 2023 7:13 AM: ExecuteNonQuery: DELETE FROM "ChangeJournalData" WHERE "FilesetID" NOT IN (SELECT DISTINCT "ID" FROM "Fileset") took 0:00:00:00.000
Aug 30, 2023 7:13 AM: Starting - ExecuteNonQuery: DELETE FROM "ChangeJournalData" WHERE "FilesetID" NOT IN (SELECT DISTINCT "ID" FROM "Fileset")
Aug 30, 2023 7:13 AM: ExecuteNonQuery: DELETE FROM "FilesetEntry" WHERE "FilesetID" NOT IN (SELECT DISTINCT "ID" FROM "Fileset") took 0:00:00:21.881
Aug 30, 2023 7:12 AM: Starting - ExecuteNonQuery: DELETE FROM "FilesetEntry" WHERE "FilesetID" NOT IN (SELECT DISTINCT "ID" FROM "Fileset")
Aug 30, 2023 7:12 AM: ExecuteNonQuery: DELETE FROM "Fileset" WHERE "Timestamp" IN (1692761624) took 0:00:00:00.000
Aug 30, 2023 7:12 AM: Starting - ExecuteNonQuery: DELETE FROM "Fileset" WHERE "Timestamp" IN (1692761624)
Aug 30, 2023 7:12 AM: Deleting 1 remote fileset(s) ...
Aug 30, 2023 7:12 AM: All backups to delete: 8/22/2023 8:33:44 PM
Aug 30, 2023 7:12 AM: Backups outside of all time frames and thus getting deleted:
Aug 30, 2023 7:12 AM: Backups in this time frame:
Aug 30, 2023 7:12 AM: Next time frame and interval pair: 365.00:00:00 / 31.00:00:00
Aug 30, 2023 7:12 AM: Deleting backup: 8/22/2023 8:33:44 PM
Aug 30, 2023 7:12 AM: Keeping backup: 8/20/2023 8:01:22 AM
Aug 30, 2023 7:12 AM: Keeping backup: 8/10/2023 4:33:19 AM
Aug 30, 2023 7:12 AM: Keeping partial backup: 8/5/2023 3:16:19 PM
Aug 30, 2023 7:12 AM: Keeping partial backup: 8/5/2023 3:16:18 PM
Aug 30, 2023 7:12 AM: Backups in this time frame: 8/5/2023 3:16:18 PM, 8/5/2023 3:16:19 PM, 8/10/2023 4:33:19 AM, 8/20/2023 8:01:22 AM, 8/22/2023 8:33:44 PM

So I guess it’s not a hard issue with my setup/configuration but instead a consequence of my backup size. Is it worth re-doing the backup to increase the blocksize? Is that what’s causing these deletes to take hours?

Deleting a backup version runs the statements below, and it’s working its way through them. The largest tables are probably Block (which scales with block count, which in turn scales with backup size and inversely with the blocksize setting), and BlocksetEntry, which scales similarly except that it enumerates the blocks of unique file versions. Basically, storage of blocks is deduplicated, but something has to track the full content of every file present. The statement it’s currently working on involves both of those very large tables.

Pretty-printing the presumably busy-but-might-finish-sometime SQL statement makes it easier to read:

INSERT INTO "DeletedBlock" (
	"Hash"
	,"Size"
	,"VolumeID"
	)
SELECT "Hash"
	,"Size"
	,"VolumeID"
FROM "Block"
WHERE "ID" NOT IN (
		SELECT DISTINCT "BlockID" AS "BlockID"
		FROM "BlocksetEntry"
		
		UNION
		
		SELECT DISTINCT "ID"
		FROM "Block"
			,"BlocklistHash"
		WHERE "Block"."Hash" = "BlocklistHash"."Hash"
		)

DeletedBlock tracks wasted space, meaning blocks that are no longer in any backup version and that compact will eventually clean up; however, killing Duplicati early might be allowing the waste to build up.
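
If you want to gauge the built-up waste, something like this against the job database should show it (a sketch; stop the container first so the file isn’t locked, and substitute your actual database path for the placeholder):

sqlite3 /config/JOBDB.sqlite 'SELECT COUNT(*) AS blocks, SUM("Size") AS bytes FROM "DeletedBlock";'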

Cleanup is progressive. BlocksetEntry is already cleaned up (taking 2+ hours), so this query looks at what remains and moves waste to DeletedBlock. The other reason a block may still be in use is as a blocklist, which enumerates the blocks of every file that has multiple blocks, sort of like BlocksetEntry.

How the backup process works explains more, and all the explanations given omitted some fine details.

When I mentioned table scaling earlier, that was scaling of table size. The processing rate can scale worse as table size increases. Some of that is computational, but some is also memory caches filling up, causing I/O. There’s recently an environment variable to increase SQLite’s cache, but unless you’re really interested in further analysis (if so, other ideas exist), it might be best to redo the backup with a more suitable blocksize.
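
To get a feel for the table sizes involved, and to try a bigger cache, something along these lines (a sketch; the database path and image name are illustrative, and the variable name is from my memory of the recent addition, so treat it as an assumption):

# Row counts of the two big tables (run with the container stopped):
sqlite3 /config/JOBDB.sqlite 'SELECT COUNT(*) FROM "Block";'
sqlite3 /config/JOBDB.sqlite 'SELECT COUNT(*) FROM "BlocksetEntry";'

# Assumption: recent builds read this variable and apply it as a PRAGMA;
# SQLite treats a negative cache_size as KiB, so this is roughly 200 MB:
docker run -d -e CUSTOMSQLITEOPTIONS_DUPLICATI=cache_size=-200000 duplicati/duplicati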

The drawback is that you either get a fresh start or you use a rather experimental method to expand the blocksize.

Probably. One way to tell for sure.

EDIT:

The join in the bottom SELECT looks potentially slow due to lack of relevant table indexing, meaning it might have to do linear scans through huge tables. You could maybe just run it with sqlitebrowser (after killing the container) or some other favored tool that will say how long an SQL statement takes…
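
For example, with the plain sqlite3 CLI (container stopped; .timer prints elapsed time, and EXPLAIN QUERY PLAN shows whether indexes get used). The database path is a placeholder:

sqlite3 /config/JOBDB.sqlite <<'SQL'
.timer on
EXPLAIN QUERY PLAN
SELECT "Hash", "Size", "VolumeID" FROM "Block"
WHERE "ID" NOT IN (
    SELECT DISTINCT "BlockID" FROM "BlocksetEntry"
    UNION
    SELECT DISTINCT "ID" FROM "Block", "BlocklistHash"
    WHERE "Block"."Hash" = "BlocklistHash"."Hash");
SQL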

P.S. Perhaps not that bad. EXPLAIN QUERY PLAN at least shows it using a covering index. The point is that it’s in theory possible to track down slowness; it just takes a lot of analysis on a system showing the issue.

(screenshot of the EXPLAIN QUERY PLAN output omitted)

I’m not sure I understand; do you mean the following line:

Aug 30, 2023 7:15 AM: ExecuteNonQuery: DELETE FROM "BlocksetEntry" WHERE "BlocksetID" NOT IN (SELECT DISTINCT "ID" FROM "Blockset") took 0:00:02:21.692

For me it’s 2 minutes…

Thanks for the correction. I was probably looking for:

Maybe that refers to the INSERT that was still running when we last heard? Is there actually any slow DELETE?

Anyway, something’s slow, seemingly in that unfinished profiled statement. An easy way to look for database changes is to check the size and timestamps on your database and any files with a similar name. There’s generally a -journal file (an extra string on the right of the name) that is kept in case a rollback is required later…
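
On Linux that check could be as simple as (paths illustrative):

ls -l --full-time /config/*.sqlite /config/*.sqlite-journal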

One issue I see sometimes, possibly due to cache overflows, is reading through SQLite temporary files whose names begin with etilqs, however I see those on Windows in Sysinternals Process Monitor, so Linux might have to use strace on mono or something equally awful. I don’t know if these files are easily visible on Linux. On Windows they have the hidden attribute, so they need a C:\Windows\Temp>dir /ah etilqs* or PowerShell PS C:\Windows\Temp> Get-Item -Force etilqs_*. If any are findable on Linux, ls -lu --full-time might show the last access, unless the file is memory-mapped; Windows just does lots of reads.
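
A sketch of what that hunt might look like on Linux; the temp directory varies by container, and the files may be unlinked while open, so this is speculative:

# Look for SQLite spill files by name and last-access time:
ls -lu --full-time /tmp/etilqs_* 2>/dev/null
# If nothing shows (the files may be deleted while still open), check open handles:
lsof 2>/dev/null | grep -i etilqs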

Here’s the log from the latest completion. Chosen excerpt: “Delete Phase… Duration 1 day(s) and 05:40:59”:

Complete log
{
  "DeletedFiles": 0,
  "DeletedFolders": 0,
  "ModifiedFiles": 6,
  "ExaminedFiles": 927993,
  "OpenedFiles": 96,
  "AddedFiles": 90,
  "SizeOfModifiedFiles": 21476677121,
  "SizeOfAddedFiles": 13290643,
  "SizeOfExaminedFiles": 4245005644599,
  "SizeOfOpenedFiles": 21489967764,
  "NotProcessedFiles": 0,
  "AddedFolders": 81,
  "TooLargeFiles": 0,
  "FilesWithError": 0,
  "ModifiedFolders": 0,
  "ModifiedSymlinks": 0,
  "AddedSymlinks": 0,
  "DeletedSymlinks": 0,
  "PartialBackup": false,
  "Dryrun": false,
  "MainOperation": "Backup",
  "CompactResults": {
    "DeletedFileCount": 36,
    "DownloadedFileCount": 0,
    "UploadedFileCount": 0,
    "DeletedFileSize": 944092516,
    "DownloadedFileSize": 0,
    "UploadedFileSize": 0,
    "Dryrun": false,
    "VacuumResults": null,
    "MainOperation": "Compact",
    "ParsedResult": "Success",
    "Version": "2.0.7.1 (2.0.7.1_beta_2023-05-25)",
    "EndTime": "2023-08-31T19:53:38.703518Z",
    "BeginTime": "2023-08-31T19:46:31.463035Z",
    "Duration": "00:07:07.2404830",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null,
    "BackendStatistics": {
      "RemoteCalls": 105,
      "BytesUploaded": 1668923363,
      "BytesDownloaded": 130847399,
      "FilesUploaded": 63,
      "FilesDownloaded": 3,
      "FilesDeleted": 37,
      "FoldersCreated": 0,
      "RetryAttempts": 0,
      "UnknownFileSize": 0,
      "UnknownFileCount": 0,
      "KnownFileCount": 154267,
      "KnownFileSize": 4045158822047,
      "LastBackupDate": "2023-08-30T05:32:52-07:00",
      "BackupListCount": 7,
      "TotalQuotaSpace": 0,
      "FreeQuotaSpace": 0,
      "AssignedQuotaSpace": -1,
      "ReportedQuotaError": false,
      "ReportedQuotaWarning": false,
      "MainOperation": "Backup",
      "ParsedResult": "Success",
      "Version": "2.0.7.1 (2.0.7.1_beta_2023-05-25)",
      "EndTime": "0001-01-01T00:00:00",
      "BeginTime": "2023-08-30T12:32:52.919624Z",
      "Duration": "00:00:00",
      "MessagesActualLength": 0,
      "WarningsActualLength": 0,
      "ErrorsActualLength": 0,
      "Messages": null,
      "Warnings": null,
      "Errors": null
    }
  },
  "VacuumResults": null,
  "DeleteResults": {
    "DeletedSetsActualLength": 1,
    "DeletedSets": [
      {
        "Item1": 3,
        "Item2": "2023-08-22T20:33:44-07:00"
      }
    ],
    "Dryrun": false,
    "MainOperation": "Delete",
    "CompactResults": {
      "DeletedFileCount": 36,
      "DownloadedFileCount": 0,
      "UploadedFileCount": 0,
      "DeletedFileSize": 944092516,
      "DownloadedFileSize": 0,
      "UploadedFileSize": 0,
      "Dryrun": false,
      "VacuumResults": null,
      "MainOperation": "Compact",
      "ParsedResult": "Success",
      "Version": "2.0.7.1 (2.0.7.1_beta_2023-05-25)",
      "EndTime": "2023-08-31T19:53:38.703518Z",
      "BeginTime": "2023-08-31T19:46:31.463035Z",
      "Duration": "00:07:07.2404830",
      "MessagesActualLength": 0,
      "WarningsActualLength": 0,
      "ErrorsActualLength": 0,
      "Messages": null,
      "Warnings": null,
      "Errors": null,
      "BackendStatistics": {
        "RemoteCalls": 105,
        "BytesUploaded": 1668923363,
        "BytesDownloaded": 130847399,
        "FilesUploaded": 63,
        "FilesDownloaded": 3,
        "FilesDeleted": 37,
        "FoldersCreated": 0,
        "RetryAttempts": 0,
        "UnknownFileSize": 0,
        "UnknownFileCount": 0,
        "KnownFileCount": 154267,
        "KnownFileSize": 4045158822047,
        "LastBackupDate": "2023-08-30T05:32:52-07:00",
        "BackupListCount": 7,
        "TotalQuotaSpace": 0,
        "FreeQuotaSpace": 0,
        "AssignedQuotaSpace": -1,
        "ReportedQuotaError": false,
        "ReportedQuotaWarning": false,
        "MainOperation": "Backup",
        "ParsedResult": "Success",
        "Version": "2.0.7.1 (2.0.7.1_beta_2023-05-25)",
        "EndTime": "0001-01-01T00:00:00",
        "BeginTime": "2023-08-30T12:32:52.919624Z",
        "Duration": "00:00:00",
        "MessagesActualLength": 0,
        "WarningsActualLength": 0,
        "ErrorsActualLength": 0,
        "Messages": null,
        "Warnings": null,
        "Errors": null
      }
    },
    "ParsedResult": "Success",
    "Version": "2.0.7.1 (2.0.7.1_beta_2023-05-25)",
    "EndTime": "2023-08-31T19:53:38.703519Z",
    "BeginTime": "2023-08-30T14:12:39.781462Z",
    "Duration": "1.05:40:58.9220570",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null,
    "BackendStatistics": {
      "RemoteCalls": 105,
      "BytesUploaded": 1668923363,
      "BytesDownloaded": 130847399,
      "FilesUploaded": 63,
      "FilesDownloaded": 3,
      "FilesDeleted": 37,
      "FoldersCreated": 0,
      "RetryAttempts": 0,
      "UnknownFileSize": 0,
      "UnknownFileCount": 0,
      "KnownFileCount": 154267,
      "KnownFileSize": 4045158822047,
      "LastBackupDate": "2023-08-30T05:32:52-07:00",
      "BackupListCount": 7,
      "TotalQuotaSpace": 0,
      "FreeQuotaSpace": 0,
      "AssignedQuotaSpace": -1,
      "ReportedQuotaError": false,
      "ReportedQuotaWarning": false,
      "MainOperation": "Backup",
      "ParsedResult": "Success",
      "Version": "2.0.7.1 (2.0.7.1_beta_2023-05-25)",
      "EndTime": "0001-01-01T00:00:00",
      "BeginTime": "2023-08-30T12:32:52.919624Z",
      "Duration": "00:00:00",
      "MessagesActualLength": 0,
      "WarningsActualLength": 0,
      "ErrorsActualLength": 0,
      "Messages": null,
      "Warnings": null,
      "Errors": null
    }
  },
  "RepairResults": null,
  "TestResults": {
    "MainOperation": "Test",
    "VerificationsActualLength": 3,
    "Verifications": [
      {
        "Key": "duplicati-20230829T002827Z.dlist.zip.aes",
        "Value": []
      },
      {
        "Key": "duplicati-i4f7ca10ba7b34541a4020bc8518ab3a5.dindex.zip.aes",
        "Value": []
      },
      {
        "Key": "duplicati-b8e74a53bc5ca4cc99d582d2d86a5dadd.dblock.zip.aes",
        "Value": []
      }
    ],
    "ParsedResult": "Success",
    "Version": "2.0.7.1 (2.0.7.1_beta_2023-05-25)",
    "EndTime": "2023-08-31T19:55:39.836678Z",
    "BeginTime": "2023-08-31T19:54:15.796028Z",
    "Duration": "00:01:24.0406500",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null,
    "BackendStatistics": {
      "RemoteCalls": 105,
      "BytesUploaded": 1668923363,
      "BytesDownloaded": 130847399,
      "FilesUploaded": 63,
      "FilesDownloaded": 3,
      "FilesDeleted": 37,
      "FoldersCreated": 0,
      "RetryAttempts": 0,
      "UnknownFileSize": 0,
      "UnknownFileCount": 0,
      "KnownFileCount": 154267,
      "KnownFileSize": 4045158822047,
      "LastBackupDate": "2023-08-30T05:32:52-07:00",
      "BackupListCount": 7,
      "TotalQuotaSpace": 0,
      "FreeQuotaSpace": 0,
      "AssignedQuotaSpace": -1,
      "ReportedQuotaError": false,
      "ReportedQuotaWarning": false,
      "MainOperation": "Backup",
      "ParsedResult": "Success",
      "Version": "2.0.7.1 (2.0.7.1_beta_2023-05-25)",
      "EndTime": "0001-01-01T00:00:00",
      "BeginTime": "2023-08-30T12:32:52.919624Z",
      "Duration": "00:00:00",
      "MessagesActualLength": 0,
      "WarningsActualLength": 0,
      "ErrorsActualLength": 0,
      "Messages": null,
      "Warnings": null,
      "Errors": null
    }
  },
  "ParsedResult": "Warning",
  "Version": "2.0.7.1 (2.0.7.1_beta_2023-05-25)",
  "EndTime": "2023-08-31T19:55:40.149713Z",
  "BeginTime": "2023-08-30T12:32:52.919621Z",
  "Duration": "1.07:22:47.2300920",
  "MessagesActualLength": 222,
  "WarningsActualLength": 10,
  "ErrorsActualLength": 0,
  "Messages": [
    "2023-08-30 05:32:52 -07 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started",
    "2023-08-30 05:36:54 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()",
    "2023-08-30 05:37:10 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (150.63 KB)",
    "2023-08-30 06:40:47 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-ba8b9633df7654d03b5fa258a21466893.dblock.zip.aes (49.92 MB)",
    "2023-08-30 06:40:52 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-bf383ce69da534169a9e820bf1a741628.dblock.zip.aes (49.91 MB)",
    "2023-08-30 06:41:00 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b14ee30b999334e83a5362e371fa95a77.dblock.zip.aes (49.92 MB)",
    "2023-08-30 06:41:03 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b5a675d644ec34d17ba85b3e4dcddcd00.dblock.zip.aes (49.91 MB)",
    "2023-08-30 06:42:20 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-ba8b9633df7654d03b5fa258a21466893.dblock.zip.aes (49.92 MB)",
    "2023-08-30 06:42:38 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-i17ff0fbd54cc4211ab4855bbb7e196bb.dindex.zip.aes (165.64 KB)",
    "2023-08-30 06:42:39 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-i17ff0fbd54cc4211ab4855bbb7e196bb.dindex.zip.aes (165.64 KB)",
    "2023-08-30 06:43:49 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b9ec8ef17ce214bb7aa7cc3591455e506.dblock.zip.aes (49.91 MB)",
    "2023-08-30 06:44:37 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b14ee30b999334e83a5362e371fa95a77.dblock.zip.aes (49.92 MB)",
    "2023-08-30 06:44:37 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-if59d942989f14ce686105e1bdb199fd5.dindex.zip.aes (265.92 KB)",
    "2023-08-30 06:44:38 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-if59d942989f14ce686105e1bdb199fd5.dindex.zip.aes (265.92 KB)",
    "2023-08-30 06:44:39 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-b60985f36740544b192edc36b3dab775b.dblock.zip.aes (49.92 MB)",
    "2023-08-30 06:45:21 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-bf383ce69da534169a9e820bf1a741628.dblock.zip.aes (49.91 MB)",
    "2023-08-30 06:45:22 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-i71f78c2eb4754e9fb3a4618c9e88c4ed.dindex.zip.aes (265.43 KB)",
    "2023-08-30 06:45:23 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-i71f78c2eb4754e9fb3a4618c9e88c4ed.dindex.zip.aes (265.43 KB)",
    "2023-08-30 06:45:23 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Started: duplicati-ba856b2eef47f462fa449b027898a4aab.dblock.zip.aes (49.94 MB)",
    "2023-08-30 06:46:15 -07 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Completed: duplicati-b9ec8ef17ce214bb7aa7cc3591455e506.dblock.zip.aes (49.91 MB)"
  ],
  "Warnings": [
    "2023-08-30 05:37:16 -07 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: /source/mnt/user/appdata/zerotier/zerotier-one/controller.d/",
    "2023-08-30 05:37:16 -07 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: /source/mnt/user/appdata/zerotier/zerotier-one/controller.d/",
    "2023-08-30 05:37:16 -07 - [Warning-Duplicati.Library.Main.Operation.Backup.FileBlockProcessor.FileEntry-PathProcessingFailed]: Failed to process path: /source/mnt/user/appdata/zerotier/zerotier-one/authtoken.secret",
    "2023-08-30 05:37:16 -07 - [Warning-Duplicati.Library.Main.Operation.Backup.FileBlockProcessor.FileEntry-PathProcessingFailed]: Failed to process path: /source/mnt/user/appdata/zerotier/zerotier-one/identity.secret",
    "2023-08-30 05:37:16 -07 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: /source/mnt/user/appdata/zerotier/zerotier-one/controller.d/",
    "2023-08-30 05:37:16 -07 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: /source/mnt/user/appdata/zerotier/zerotier-one/controller.d/",
    "2023-08-30 05:39:19 -07 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: /source/mnt/user/appdata/meem/var/lib/postgresql/data/",
    "2023-08-30 05:39:19 -07 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: /source/mnt/user/appdata/meem/var/lib/postgresql/data/",
    "2023-08-30 05:40:41 -07 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: /source/mnt/user/appdata/meem/var/lib/postgresql/data/",
    "2023-08-30 05:40:41 -07 - [Warning-Duplicati.Library.Main.Operation.Backup.FileEnumerationProcess-FileAccessError]: Error reported while accessing file: /source/mnt/user/appdata/meem/var/lib/postgresql/data/"
  ],
  "Errors": [],
  "BackendStatistics": {
    "RemoteCalls": 105,
    "BytesUploaded": 1668923363,
    "BytesDownloaded": 130847399,
    "FilesUploaded": 63,
    "FilesDownloaded": 3,
    "FilesDeleted": 37,
    "FoldersCreated": 0,
    "RetryAttempts": 0,
    "UnknownFileSize": 0,
    "UnknownFileCount": 0,
    "KnownFileCount": 154267,
    "KnownFileSize": 4045158822047,
    "LastBackupDate": "2023-08-30T05:32:52-07:00",
    "BackupListCount": 7,
    "TotalQuotaSpace": 0,
    "FreeQuotaSpace": 0,
    "AssignedQuotaSpace": -1,
    "ReportedQuotaError": false,
    "ReportedQuotaWarning": false,
    "MainOperation": "Backup",
    "ParsedResult": "Success",
    "Version": "2.0.7.1 (2.0.7.1_beta_2023-05-25)",
    "EndTime": "0001-01-01T00:00:00",
    "BeginTime": "2023-08-30T12:32:52.919624Z",
    "Duration": "00:00:00",
    "MessagesActualLength": 0,
    "WarningsActualLength": 0,
    "ErrorsActualLength": 0,
    "Messages": null,
    "Warnings": null,
    "Errors": null
  }
}

Probably going to restore to a colocated machine and re-back-up the data with the same config except a larger block size. It would be cool if the tooltip for that explained some of the time-domain performance impact of choosing various sizes.

Thanks again for all the help and patience.


That’s a bafflingly long time.
I’d expect compact to take most of the time for a delete, but that does not seem to be the case.

If you could at some point disable pruning of your backup (that is, set backup retention to keep all backups) and then manually run the delete of the version you’d expect to be removed, it would be appreciated, because I have absolutely no idea what could take so long in deleting a backup version. You can do that by exporting the job as a command line, removing the filters and the source paths, changing the command to delete, adding --version=(version to delete), --log-file=/path/to/logfile and --log-file-log-level=profiling, and then posting the log file here.
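
Roughly this shape, assuming the exported destination URL is kept for the delete command (every value here is a placeholder to adjust from your export, and the binary may be duplicati-cli or mono Duplicati.CommandLine.exe depending on the install):

duplicati-cli delete "ssh://backup.example.com/duplicati" \
  --dbpath=/config/JOBDB.sqlite \
  --version=5 \
  --log-file=/tmp/delete-profiling.log \
  --log-file-log-level=profiling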

Although it was a milder too-many-blocks situation, I think we’ve fought fileset deletion code before.

Reducing Time Spent Deleting Blocks, whose parent topic also had us increasing the SQLite cache.

The current investigation has done less at the SQL level, but some testing there would seem useful.
There was a suggestion of a manual test for part of the slow query, but a profiling log may also show it.

It presumably finished eventually, but we don’t know how long it took, or which parts of it were the slow spots.

For a big backup (here we have 4 TB) I’d expect minutes, yes, not hours.