So what are orphan files, and how do I get rid of them?

I started receiving messages saying that there are files at the destination with hash failures and unexpected lengths (I don't recall the exact wording), so I moved them to another directory and tried running purge-broken-files, but it failed with: "Unable to start the purge process as there are 33 orphan file(s). Return code: 100".

I have searched this forum, but couldn't find anything meaningful about orphan files or what to do with them.

Any help is appreciated.
Thanks

P.s.: Attempting to run Repair also fails; here's part of the profiling log:

[screenshot of the profiling log]

And the details of the message shown on the second row of the image are:
{"ClassName":"Duplicati.Library.Interface.UserInformationException","Message":"Repair not possible, missing 6 blocks.\nIf you want to continue working with the database, you can use the \"list-broken-files\" and \"purge-broken-files\" commands to purge the missing data from the database and the remote storage.","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":" at Duplicati.Library.Main.Operation.RepairHandler.RunRepairRemote()","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":"8\nRunRepairRemote\nDuplicati.Library.Main, Version=2.0.3.5, Culture=neutral, PublicKeyToken=null\nDuplicati.Library.Main.Operation.RepairHandler\nVoid RunRepairRemote()","HResult":-2146233088,"Source":"Duplicati.Library.Main","WatsonBuckets":null}

I am in the same situation now. Any tips? :wink:

Unable to start the purge process as there are 261 orphan file(s)

I never got an answer, but here is what I usually try:

1. Browse the backup files at the destination, sorted by creation time, and move every file newer than X to another directory, where X is the estimated time the problem started (a rough script for this step is below).
2. Attempt a repair or a database rebuild.
3. If the orphans still exist, choose an earlier time for X and try again.

Sometimes both the repair and the rebuild fail. In that case I try to restore an older copy of the local database and repeat the process above.
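If the destination is a plain local folder, that first step can be scripted. This is only a rough sketch with made-up paths and a made-up cutoff date; adapt it to your setup, and note that it moves files rather than deleting them:

```python
import shutil
from datetime import datetime
from pathlib import Path

# Made-up paths and cutoff -- adjust to your own setup.
BACKEND = Path("/mnt/backup/duplicati")      # where the dblock/dindex/dlist files live
QUARANTINE = Path("/mnt/backup/quarantine")  # files are moved here, not deleted
CUTOFF = datetime(2018, 5, 1)                # the estimated time the problem started ("X")

QUARANTINE.mkdir(parents=True, exist_ok=True)

for f in sorted(BACKEND.iterdir()):
    if f.is_file() and datetime.fromtimestamp(f.stat().st_mtime) > CUTOFF:
        print(f"moving {f.name}")
        shutil.move(str(f), str(QUARANTINE / f.name))
```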

Unfortunately, it has happened to me three times that I lost all hope in a backup and had to start again from scratch. I have about 8 jobs, so I get 8 times the chances of failure. Still, I believe that having a solid full backup (as simple as making a RAR file) once in a while is a good countermeasure.
The problems with my backups are caused by either a shutdown of my machine while the job is running, or a network disconnect from the remote target…

Manually altering or deleting backend files could corrupt or destroy your backup data. A compact operation can combine data from older backup files into a new DBLOCK file, which results in a file with a recent creation date that nevertheless contains data from older backups. Quarantining it based on its date would break those older versions.

A safer way is to do a full recreate of the database.

First, click on your backup name, then click "Database" under "Advanced …". Make a copy of the .sqlite file (the path is shown in the "Location" section).
Then click the button “Recreate (delete and repair)”. This will initiate a full database recreation without affecting your backup files.
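If you prefer to script the safety copy, a minimal Python sketch like this works; the database path here is just an example, so use the one shown in the "Location" section:

```python
import shutil
from datetime import datetime
from pathlib import Path

# Example path only -- use the real path from the "Location" section.
db = Path.home() / ".config" / "Duplicati" / "ABCDEFGHIJ.sqlite"

# Keep a timestamped copy next to the original, so Recreate can't clobber it.
backup = db.with_name(f"{db.stem}-backup-{datetime.now():%Y%m%d-%H%M%S}{db.suffix}")
shutil.copy2(db, backup)
print(f"copied {db} -> {backup}")
```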

Recreating the database solved this issue for me.

Are all of you on a Beta release? I'm curious whether all the bug fixes in Canary would help, but because I can't immediately point to a specific fix, I'm just gathering input for now. One thing Canary does fix is an issue where Recreate might slow down in the 70%-100% range of the progress bar while downloading dblocks.

FYI, the code that checks for orphan files looks like it defines them as entries in the consolidated file table that somehow aren't referenced by any backup (each backup references all the path names it contains).

Because Recreate goes by the names in the dlist files, it seems like it shouldn't be possible to add any name that isn't in a backup, which would prevent the problem. But that's just theory, and I'm no expert.

If anybody is up for some database diving with DB Browser for SQLite, it might be interesting to see the actual names of the orphan files instead of just the count. I wonder why they aren't in any backup version? Typically files hang around for a while and get caught (maybe with changes) in a series of backup runs.
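To save someone the clicking in DB Browser, here is a small Python sketch of the sort of query I have in mind. The table names are my assumption from reading the schema, and they have changed between versions (older databases have a "File" table, newer ones a "FileLookup" table behind a "File" view), so run this against a copy of the database, not the live one:

```python
import sqlite3

# Point this at a COPY of the job database, not the live one.
DB_PATH = "/path/to/copy-of-job-database.sqlite"

# An orphan, as I read the check, is a file-table row that no
# fileset (backup version) references.
QUERY = """
SELECT "Path"
FROM "File"
WHERE "ID" NOT IN (SELECT "FileID" FROM "FilesetEntry")
ORDER BY "Path";
"""

conn = sqlite3.connect(DB_PATH)
rows = conn.execute(QUERY).fetchall()
conn.close()

print(f"{len(rows)} orphan file entries")
for (path,) in rows:
    print(path)
```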