Failed: Found 10 remote files that are not recorded in local storage

I thought I would provide some feedback here because, for some months and across different Canary releases, I’ve had one server, running Fedora 42, report this same issue every few days for one of its backups. However, this backup uses an SMB share on a local Windows Server 2025 machine as its destination, not an offsite one as reported above.

I waited until this morning for the backups to complete after updating to .105, and this is what was reported:

Failed: Found 10 remote files that are not recorded in local storage. This can be caused by having two backups sharing a destination folder which is not supported. It can also be caused by restoring an old database. If you are certain that only one backup uses the folder and you have the most updated version of the database, you can use repair to delete the unknown files.
Details: Duplicati.Library.Interface.RemoteListVerificationException: Found 10 remote files that are not recorded in local storage. This can be caused by having two backups sharing a destination folder which is not supported. It can also be caused by restoring an old database. If you are certain that only one backup uses the folder and you have the most updated version of the database, you can use repair to delete the unknown files.
   at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(IBackendManager backend, Options options, LocalDatabase database, IBackendWriter log, IEnumerable`1 protectedFiles, IEnumerable`1 strictExcemptFiles, Boolean logErrors, VerifyMode verifyMode, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(Options options, BackupResults result, IBackendManager backendManager)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(Options options, BackupResults result, IBackendManager backendManager)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass22_0.<<Backup>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Func`3 method)

The share each backup uses is a unique folder dedicated to this server, so there is no sharing with another server. The main share is \\LISA\MAGGIE, and this server has two backups: one uses \\LISA\MAGGIE\DUPLICATI\LOCAL\ as its destination and rarely fails (and so far never like this); the second uses \\LISA\MAGGIE\DUPLICATI\DP. The way I have things scheduled, when the first backup finishes it starts the second one using an after-job script. The second job just backs up the database of the first job.
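
For reference, here’s a minimal sketch of what that after-job script does conceptually. Duplicati’s --run-script-after option and its DUPLICATI__PARSED_RESULT environment variable exist; the duplicati-server-util “run” command and the “DP” job name below are just placeholders for however the second job actually gets started:

```python
#!/usr/bin/env python3
# Hypothetical after-job script, invoked via --run-script-after.
# If the first backup succeeded, kick off the second (DP) job.
import os
import subprocess

# Duplicati exports the outcome of the finished operation to the script.
if os.environ.get("DUPLICATI__PARSED_RESULT") == "Success":
    # "duplicati-server-util run DP" is a placeholder for the real trigger.
    subprocess.run(["duplicati-server-util", "run", "DP"], check=False)
```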

If I run a repair and then re-run the backup, it usually works, but I’ve had times when it needed a second repair. In the case above, it worked after a single repair.

My prediction is that it will probably be fine.

I just realised that the image shows three backups; only two of them use the share, and the order is Local (scheduled), Wasabi (scripted), DP (scripted), so there is quite a gap between the two backups that use the SMB share.

This issue reports “files that are missing from the remote storage”.

Your new report is “files that are not recorded in local storage”.

That’s the opposite issue. Would you like me to move the report?

Ah yes, I wasn’t paying attention, so please move it. Thanks.

Is this the typical error, seen on the DP backup for several months, with the prior backup succeeding?

The prior backup matters because PreBackupVerify happens before the backup, so it checks the state left by the prior run.

Do you have any logs with the actual file names, so that the history of those files can be checked?

In addition to the job log, the server log at About → Logs → Stored might have them.

Next steps:

- You can set a log file with --log-file, maybe at --log-file-log-level=Retry, so that file uploads get recorded.
- Missing file names might be in About → Logs → Stored.
- The job database usually has 30 days of history; it’s easiest to search it with DB Browser for SQLite (a scripted alternative is sketched after this list).
- A database bug report is sometimes also an option.
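
If DB Browser isn’t handy, a small script can do the same search. This is only a sketch: the RemoteOperation table and column names are from memory of the schema and may differ between versions, and the path and file name are placeholders:

```python
# Search a COPY of the Duplicati job database for everything that
# happened to one remote file (put, get, delete, ...).
import sqlite3
from datetime import datetime, timezone

DB_PATH = "/path/to/copy-of-job-database.sqlite"   # work on a copy, not the live DB
NAME = "duplicati-bexample00000000000000000000000.dblock.zip.aes"  # placeholder

con = sqlite3.connect(DB_PATH)
rows = con.execute(
    "SELECT Timestamp, Operation, Path FROM RemoteOperation "
    "WHERE Path = ? ORDER BY Timestamp",
    (NAME,),
)
for ts, op, path in rows:
    # Timestamps appear to be stored as Unix seconds.
    print(datetime.fromtimestamp(ts, tz=timezone.utc), op, path)
con.close()
```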

Ok, that’s a lot to go through.

It’s always the same error; just the number of files found changes.

None of the failed runs are logged with the job; only the repairs are seen for each one:

I’ll look at setting a log option just for this job.

No, no filenames are logged under Stored.

I’ve generated a bug report for the job database.

Might be this serious (IMO) Canary issue:

Then maybe About → Show log → Live will show them even if you just run “Verify files”.

If you can get names, then you can look at file dates to guess which backup wrote them.
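
As a sketch, assuming the destination folder is reachable somewhere as a path (the mount point below is a placeholder), the dates can be listed like this:

```python
# List destination files oldest-first by modification time, so the
# "extra" files can be matched against backup start times.
import os
from datetime import datetime

DEST = "/mnt/lisa/maggie/DUPLICATI/DP"  # placeholder path to the destination

for entry in sorted(os.scandir(DEST), key=lambda e: e.stat().st_mtime):
    if entry.is_file():
        print(datetime.fromtimestamp(entry.stat().st_mtime), entry.name)
```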

Maybe the bug report will reveal names if you put it somewhere and post a link to it.

comes to mind, but that seems to require a delete failure, and I’m not certain you had it.

SMB has been trouble sometimes. Is this an OS-level Fedora mount (CIFS, aka SMB), or Duplicati’s SMB destination?

The job is running on a Fedora server, but the destination is a Windows Server 2025 share. I have other Linux machines using the same Windows server for their destinations.

By destinations, I assume you mean Duplicati destinations.

What OS are the other machines running?

On Fedora, do you mount OS SMB, or use Duplicati SMB?

Still awaiting the other information. Nothing to look at so far.

All my Duplicati destinations are either a Windows Server 2025 share or a Wasabi bucket; I don’t use anything else. My Linux machines are a mix of Fedora and Debian, and their jobs use Duplicati’s SMB backend, as I no longer mount shares on the OS: too unreliable and a p.i.t.a. to set up. This Fedora one is the only machine repeatedly failing like this.

As soon as the job fails again I’ll have a log to provide. It worked today, but I took a copy of the log so it can be compared to a failed one, in case that helps.

Thanks. Sadly, it sounds like OS SMB issues might have been traded for Duplicati SMB issues, although it’s still too early to say. Maybe some dev can speak to Duplicati SMB troubleshooting.

@ts678 it “finally” failed again this morning:

2025-10-27 09:25:29 +01 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Backup has started
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'synchronous=NORMAL'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'temp_store=MEMORY'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'journal_mode=WAL'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'cache_size=-65536'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'mmap_size=67108864'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'threads=8'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'shared_cache=true'.
2025-10-27 09:25:29 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  ()
2025-10-27 09:25:29 +01 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (227 bytes)
2025-10-27 09:25:29 +01 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-b703614a9b97f42a0bbe98ceda8c8aae8.dblock.zip.aes
2025-10-27 09:25:29 +01 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-bb9386baa90ce46c28f5c1cbb807e8601.dblock.zip.aes
2025-10-27 09:25:29 +01 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-bebec2f38ad9d4d26a292e309a4f0abee.dblock.zip.aes
2025-10-27 09:25:29 +01 - [Warning-Duplicati.Library.Main.Operation.FilelistProcessor-ExtraUnknownFile]: Extra unknown file: duplicati-bfb244f76a0b04767ab7c6d1f893abc14.dblock.zip.aes
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'synchronous=NORMAL'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'temp_store=MEMORY'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'journal_mode=WAL'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'cache_size=-65536'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'mmap_size=67108864'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'threads=8'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'shared_cache=true'.
2025-10-27 09:25:29 +01 - [Error-Duplicati.Library.Main.Controller-FailedOperation]: The operation Backup has failed
Duplicati.Library.Interface.RemoteListVerificationException: Found 4 remote files that are not recorded in local storage. This can be caused by having two backups sharing a destination folder which is not supported. It can also be caused by restoring an old database. If you are certain that only one backup uses the folder and you have the most updated version of the database, you can use repair to delete the unknown files.
   at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(IBackendManager backend, Options options, LocalDatabase database, IBackendWriter log, IEnumerable`1 protectedFiles, IEnumerable`1 strictExcemptFiles, Boolean logErrors, VerifyMode verifyMode, CancellationToken cancellationToken)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(Options options, BackupResults result, IBackendManager backendManager)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(Options options, BackupResults result, IBackendManager backendManager)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IBackendManager backendManager, IFilter filter)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass22_0.<<Backup>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Func`3 method)
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'synchronous=NORMAL'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'temp_store=MEMORY'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'journal_mode=WAL'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'cache_size=-65536'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'mmap_size=67108864'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'threads=8'.
2025-10-27 09:25:29 +01 - [Verbose-Duplicati.Library.SQLiteHelper.SQLiteLoader-CustomSQLiteOption]: Setting custom SQLite option 'shared_cache=true'.

I don’t think it’s saying much that helps.

That’s because the log at failure doesn’t analyze history; that’s manual.
Duplicati might not even know the history, as the DB may roll back data.

What a log file allows is checking the history of those four extra files.
One can tell if they came from the primary backup, an occasional compact, etc.
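
For example, a small script can pull those histories out of a --log-file (the log path is a placeholder; the names are the four from the failure above):

```python
# Print every log line that mentions one of the extra dblock files,
# reconstructing each file's history (upload, verify, compact, delete).
LOG = "/var/log/duplicati-dp.log"  # placeholder for wherever --log-file points
NAMES = [
    "duplicati-b703614a9b97f42a0bbe98ceda8c8aae8.dblock.zip.aes",
    "duplicati-bb9386baa90ce46c28f5c1cbb807e8601.dblock.zip.aes",
    "duplicati-bebec2f38ad9d4d26a292e309a4f0abee.dblock.zip.aes",
    "duplicati-bfb244f76a0b04767ab7c6d1f893abc14.dblock.zip.aes",
]

with open(LOG, encoding="utf-8", errors="replace") as f:
    for line in f:
        if any(name in line for name in NAMES):
            print(line.rstrip())
```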

It might also show some special event between creation and complaint.
Failures and unusual conditions might cause the database to forget records.
Backups that ran there without error are also interesting, as the list is checked often.

You can also use your memory to think of any out-of-the-ordinary events.
If the backup runs on a schedule, you can look to see if logs from those times are around.
You can also check the server log at About → Logs → Stored for oddities.
Job logs get dropped easily; the server log is better; a log file is best, but more work.

Regardless, there’s probably a historical cause, and now one can go look.
That might still come up empty, but usually data helps a lot with writing a fix.

My opinion anyway. If the developers have other ideas, they can chime in.

Sometimes a heavier log level is needed, but the log file can grow painfully huge.
Here’s an example of how a database transaction rollback discarding data looks:

2025-10-14 08:49:17 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ReusableTransaction-Unnamed commit]: Starting - CommitTransaction: Unnamed commit
2025-10-14 08:49:18 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ReusableTransaction-Unnamed commit]: CommitTransaction: Unnamed commit took 0:00:00:00.082
2025-10-14 08:49:18 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ReusableTransaction-Unnamed commit]: Starting - CommitTransaction: Unnamed commit
2025-10-14 08:49:18 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ReusableTransaction-Unnamed commit]: CommitTransaction: Unnamed commit took 0:00:00:00.000
2025-10-14 08:49:18 -04 - [Profiling-Timer.Begin-Duplicati.Library.Main.Database.ReusableTransaction-Dispose]: Starting - Rollback during transaction dispose
2025-10-14 08:49:18 -04 - [Profiling-Timer.Finished-Duplicati.Library.Main.Database.ReusableTransaction-Dispose]: Rollback during transaction dispose took 0:00:00:00.000

A transaction in a DBMS gives the idea in two lines: a rollback, or a crash without a commit, loses the uncommitted work. Ordinarily this is a feature to avoid leaving half-done (inconsistent) work, but it’s tricky to do well.
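
Here’s a minimal, self-contained illustration of that loss mode in plain sqlite3, nothing Duplicati-specific: a record written but rolled back disappears, even though the corresponding remote file may already have been uploaded.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Remotevolume (Name TEXT)")

# First "upload" record: committed, so it survives.
con.execute("INSERT INTO Remotevolume VALUES ('duplicati-b1.dblock.zip.aes')")
con.commit()

# Second "upload" record: rolled back before commit, so it is lost,
# even if the file itself already made it to the remote.
con.execute("INSERT INTO Remotevolume VALUES ('duplicati-b2.dblock.zip.aes')")
con.rollback()

print([row[0] for row in con.execute("SELECT Name FROM Remotevolume")])
# prints ['duplicati-b1.dblock.zip.aes']; the second file would now
# show up as an "extra unknown file" on the next verify.
```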

Having concurrent processing makes it even harder. Sometimes timings just land in a “bad” way.
This is where hugely detailed logs help, but they’re a pain to read, and best left to the developer experts.

The best case, though sometimes also hard, is a reliable repro; then the experts can study it themselves. Randomly occurring problems are hard to analyze. Logs help. Heavy logs help more. We’ll see.

Again, my opinion from looking at this sort of stuff. Duplicati devs probably have opinions as well.

EDIT 1:

My example logged both commit and rollback. I’m not saying this one went wrong, but some can.

I did switch it to profiling after I ran the repair, so I now have a good backup as a starting point.

The only thing that happened is that the server was rebooted this weekend, but no backups ran during that time, so the last one was on Friday.

I’m wondering if it’s the reboot: it trashes something that hasn’t been flushed out, so when the server restarts I get the error.

I’ll reboot it before tomorrow and then see if it fails again.