No more backup because of one missing file

The destination drive got disconnected yesterday due to a power issue, and now I cannot back up anymore.

Error while running BK
Could not find file '\\?\R:\Duplicati\BK\duplicati-20220514T121650Z.dlist.zip.aes'.

The first time, I was told to Repair, which I did … only to end up with a message saying that Repair failed because of the missing file.

So I tried deleting the last few backups, and got this:

Running commandline entry
Finished!

            
  Listing remote folder ...
  Listing remote folder ...
  Listing remote folder ...
  Listing remote folder ...
  Listing remote folder ...

System.IO.FileNotFoundException: Could not find file '\\?\R:\Duplicati\BK\duplicati-20220514T121650Z.dlist.zip.aes'.
File name: '\\?\R:\Duplicati\BK\duplicati-20220514T121650Z.dlist.zip.aes'
   at Duplicati.Library.Main.BackendManager.List()
   at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, IEnumerable`1 protectedFiles)
   at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, IEnumerable`1 protectedFiles)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBackend, Boolean forceCompact, BackendManager sharedManager)
   at Duplicati.Library.Main.Operation.DeleteHandler.Run()
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.Delete()
   at Duplicati.CommandLine.Commands.Delete(TextWriter outwriter, Action`1 setup, List`1 args, Dictionary`2 options, IFilter filter)
   at Duplicati.CommandLine.Program.ParseCommandLine(TextWriter outwriter, Action`1 setup, Boolean& verboseErrors, String[] args)
   at Duplicati.CommandLine.Program.RunCommandLine(TextWriter outwriter, TextWriter errwriter, Action`1 setup, String[] args)
Return code: 100

So … one disconnection, and (1) all my backups are now inaccessible and (2) I cannot make another backup?

The problem might be the combination of disconnection and how you’re backing up, i.e. an issue with that choice. What I mean is that disconnection plus SSH causes no issues for me, but disconnection plus something like a mapped drive could have other issues. I back up between two computers, and either side can go offline at any time, every single day, without an issue.

Still, if the power goes out while a file is in the middle of being written, something is going to happen.

However, you can take this question off the table by improving your gear and adding UPSes to both sides and to the networking. It’s better for the equipment anyway; a power outage by itself can cause some SSD drives to become corrupted.

Duplicati could likely also do further testing and hardening, and provide a better way of dealing with this.

You can also delete the backup folder and make a clean one. It’s good to do this at least once or twice a year anyway, as extra protection. If you can’t, remember these are supposed to be backups, not the only copy, so you should be able to :slight_smile:

Also, to cover the “large backup” problem, like 1 TB: you can back up large files a different way. You don’t need to have them continually packed into containers for backup. I don’t back up anything that large using Duplicati; you do not want to throw 1 TB into containers.

You lost only the drive, but not the system running Duplicati (which could make a bigger mess)?

I’m not certain that the \\?\R: syntax can refer to a non-local drive, but it’s worth asking: what is R:?

Is this all command-line? GUI would have additional logs (in two places, so it can be confusing).
You can look at your job log from a command line run by using the GUI, or with an external sqlitebrowser.
Best case would be you have a terminal scrollback or you enabled a log file at some good level.

In GUI, you can make a database bug report and post a link. Command line has a create-report.
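If you go the command line route, a minimal sketch looks like this (the destination URL is taken from your error message; the output path is just a placeholder, and --dbpath may be needed if the database isn’t found automatically):

  Duplicati.CommandLine.exe create-report "file://R:\Duplicati\BK" C:\tmp\bugreport.zip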

No more backup because of one missing file

is not how things usually work. Duplicati repair can typically replace missing dindex and dlist files:

(intentionally deleted one dlist file)

Backup started at 5/15/2022 9:19:31 AM
Checking remote backup ...
  Listing remote folder ...
Missing file: duplicati-20220513T234141Z.dlist.zip
Found 1 files that are missing from the remote storage, please run repair
Fatal error => Found 1 files that are missing from the remote storage, please run repair

ErrorID: MissingRemoteFiles
Found 1 files that are missing from the remote storage, please run repair

(so I do)

  Listing remote folder ...
  Uploading file (1.73 KB) ..

(backup now works)

Although it would be best to know what went wrong in your case, that depends on what info you give.
Ideally you would set up a test backup and see if you can reproduce it reliably, and file an Issue on it.
Less ideally, you have some more data on how this failed (the more the better, but the default log is light).

If you prefer to get going again, one sure-fire (I think) way to stop the complaints about the missing dlist would be to erase the records of what files to expect. Rename the database (for safety, in case the result turns out worse), and then do a GUI Database Repair or run the command line repair command to recreate the DB.

If no local database is found or the database is empty, the database is re-created with data from the storage.
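On Windows that could look something like the following sketch. The database name here (ABCDEFGHIJ.sqlite) is hypothetical; yours is a random-looking name under %LOCALAPPDATA%\Duplicati, and the job’s Database screen shows the exact path:

  ren "%LOCALAPPDATA%\Duplicati\ABCDEFGHIJ.sqlite" ABCDEFGHIJ.sqlite.old
  Duplicati.CommandLine.exe repair "file://R:\Duplicati\BK" --dbpath="%LOCALAPPDATA%\Duplicati\ABCDEFGHIJ.sqlite" --passphrase=<your passphrase>

With the old file renamed out of the way, repair rebuilds the database from the dlist and dindex files on the destination.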

I suppose another bit of missing data is to review your redacted option list to see why my repair works.
Using advanced options changes behaviors, though I can’t think offhand of one that causes this issue.
Database damage, if Duplicati lost power too (did it?), seems more likely, so a DB bug report might show it.

I’ve been asking for volunteers to test (and code, and document, and help on forum…). Little response.
You’re welcome to help beat on it (with good instrumentation so results are useful). I’m beating it some.

Recently I got a kill test script going to see what it can hurt. Most findings are known issues awaiting a developer.
Although it’s a different test than the report here, out of over 1500 kills it ran over 175 missing-file fixes courtesy of the repair command, after missing files were perceived, typically due to DB record losses from transaction rollback after a killed compact: the compact did the delete but not its commit, so the DB forgot the delete.

This is where things stand in the most recent run, which is kind of slow because it does a DB recreation from the backup files (as I’m suggesting here) as an extra test that things are OK. This one’s a missing-file fix:

Running backup at 2022-05-15 09:46:29.392084
Exit code 0
Running recreate at 2022-05-15 09:46:36.665994
Running repair at 2022-05-15 09:46:36.666971
Statistics: started: 1770 timeout: 637 missing: 80 extra: 33 marked: 0

Running backup at 2022-05-15 09:49:50.547392
Timed out after 7 seconds
Statistics: started: 1771 timeout: 638 missing: 80 extra: 33 marked: 0

Running backup at 2022-05-15 09:50:02.974685
Timed out after 12 seconds
Statistics: started: 1772 timeout: 639 missing: 80 extra: 33 marked: 0

Running backup at 2022-05-15 09:50:21.061797
Exit code 100
stderr was:

ErrorID: MissingRemoteFiles
Found 2 files that are missing from the remote storage, please run repair

Running repair at 2022-05-15 09:50:25.330887
Running recreate at 2022-05-15 09:50:31.619968
Running repair at 2022-05-15 09:50:31.620945
Statistics: started: 1773 timeout: 639 missing: 81 extra: 33 marked: 0

EDIT:

The sequencing of this is unclear. This is the last copy-and-paste, then it says “cannot make another backup”, but I’m guessing the copy-and-paste at the top is where it ends. Was another repair tried?
Look at my manual and automated test results to see how repair can fix most “missing file” problems.

OK, first of all, I must disclose that while I’m more tech-savvy than the average computer user, I’m not tech-savvy by the standards of this forum. My knowledge is more practical than technical; I just want things to work so I can focus on my work and, time allowing, on my life interests.

My Duplicati setup is a bit complicated due to lack of space on my computer (a 2015 notebook with a 256 GB SSD). My Duplicati database is on the SSD, but the backup is in an encrypted folder in the cloud, and the cache for this backup (for the upload, to be precise) is on an external hard drive. This hard drive disconnected yesterday when I made the mistake of plugging one too many devices into the same USB hub. So what must have happened is that the file Duplicati was creating got corrupted.

But if the corruption of one file – among the >5,200 files (127 GB) making up this accumulated backup – invalidates all the incremental backups (>70) and makes further backups impossible, that’s a serious flaw. I tried repairing again. I tried to delete and repair the database. All to no avail.

More so than I imagined. Could you say what software does the caching and uploading, and creates R:?
There’s probably little chance (but probably > 0) that we can figure out exactly what happened there.

Generally it’s safest to have Duplicati do encrypted backups directly to a cloud folder, because it then has better information on what got uploaded and what didn’t, without wondering whether a file really made it onto the remote.

In an actual loss of the local system, one would want to know that all files are elsewhere, and if they went elsewhere in original creation order (as opposed to whatever order the cache used), that’s somewhat safer.

This gives no clue as to what it did. Any information such as some of the many things I’ve suggested?
Yes, you just want things to work. They did not, so there’s a problem to be investigated. Will you help?
Posting the console output (if this is command line) is a start. console-log-level can be increased if needed.
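For example, appended to a repair; the URL is from your earlier paste, and this is only a sketch to adapt:

  Duplicati.CommandLine.exe repair "file://R:\Duplicati\BK" --console-log-level=Information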

Could you say what software does the caching

pCloud.

Generally it’s safest to have Duplicati do encrypted backups directly to a cloud folder

It works as a cloud folder, and the folder didn’t disconnect. (It did happen in the past, but that proved not to be a problem: the files that still needed to be uploaded just waited until I got reconnected.) What disconnected yesterday was the hard drive with the local pCloud cache (used as a buffer for the files that are being uploaded).

Any information such as some of the many things I’ve suggested?

I tried those I could understand. As I mentioned, I’m not tech-savvy by the standards of this forum.

Posting the console output (if this is command line)

I tried running your command line using the “Run” function of Windows, but it didn’t work. So I guess you’re talking about Linux?

bugreport.zip (4.5 KB)

Thank you. From what I can tell:

How does pCloud Drive use the cache storage? describes that.

Where is my cache located? describes what I suppose you did to move the cache onto the external drive.

What’s not described (not surprisingly) is what can happen on disconnection. Even with an ordinary USB drive used for direct access, filesystem corruption can happen if a disconnect occurs while the drive is active.

You no longer need to use Safely Remove Hardware when removing a USB drive on Windows 10
was a claim once made, but as you can see it received some skepticism. Regardless, it appears the filesystem was not totally corrupted; otherwise, I guess Duplicati’s missing-file error would not have been possible.

There’s still some question about what else may be wrong, and still a big gap on the current issues seen.
For the moment, let’s ignore corruption details, and try to collect some basic info on attempts and results.

I don’t think I posted one. I did post the output from an unposted command, and output from a test script.

“it didn’t work” is also a prior problem I’m fighting: lack of specifics. The drive letter shows that it’s Windows:

Could not find file '\\?\R:

Linux and Windows both support GUI through web browser, GUI command line, and OS command line.

Using Duplicati from the Command Line is challenging (but some people prefer it) due to option volume.
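The general shape is below, though this is only a sketch with placeholder paths, not your actual job. The easy way to get the real thing, with every option filled in, is the job’s Export → As Command-line in the GUI:

  Duplicati.CommandLine.exe backup "file://R:\Duplicati\BK" "C:\Users\<you>\Documents" --passphrase=<your passphrase> --dbpath="%LOCALAPPDATA%\Duplicati\ABCDEFGHIJ.sqlite"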

Using the Command line tools from within the Graphical User Interface may be what you did to delete.
Options are easier here, because they’re carried in from the GUI job. You might also get a bit more info:

Return code: 100, for example, whereas the GUI would only show a popup (errors are red), and maybe logs.
Viewing the Duplicati Server Logs at About → Show log tends to get errors when an operation fully fails.
Viewing the log files of a backup job at <job> → Show log tends to get warnings and some small errors.

Either way, there’s usually something said, which you can retype (ouch), copy-and-paste, or screenshot.
A written description of what you do (step-by-step) and what errors occur is a helpful place to start from.

I think you’re doing what most users do (nothing wrong with that), which is GUI, and GUI Commandline when you have to do something GUI has no button for. The delete is one. Thinking more, the paste you previously posted clinches it. “Running commandline entry” means you’re using command line (in GUI).

Some operations (such as repair) have a GUI button, but can also be done in OS or GUI command line.
Note that the GUI button is easy to push, but getting details beyond the popup is harder. The command line has errors right there to copy and paste, as you did, but takes more preparation to run. Duplicati pre-fills a screen to do a backup, then you have to change the Command at the top and maybe remove excess options.

The BACKUP command is where you start in GUI Commandline.
The REPAIR command requires less, e.g. clear out source paths.
The LIST-BROKEN-FILES command is similar, and I think you ran it (at least the bug report shows runs)
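(For reference, the OS command line versions look something like this sketch; URL and dbpath are placeholders as before, and note that purge-broken-files permanently removes the affected files from the backup versions, so read the list output first.)

  Duplicati.CommandLine.exe list-broken-files "file://R:\Duplicati\BK" --dbpath="%LOCALAPPDATA%\Duplicati\ABCDEFGHIJ.sqlite"
  Duplicati.CommandLine.exe purge-broken-files "file://R:\Duplicati\BK" --dbpath="%LOCALAPPDATA%\Duplicati\ABCDEFGHIJ.sqlite"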

(image)

Beyond that, there is no useful info. The list of files is empty. What I wanted was the bug report from before the database deletion – if the old database is around, you can put it back. I asked for a rename. Did you delete? While there’s a GUI button for delete (without saving a copy…), the path to the database is shown there too.

If the database is truly gone, that’s more lost debug data, and we can only move forward with new issues.
Even if it’s not gone, it’s good for certain specific things, but does not replace your writeup and pastes.

If the above image from the DB bug report reflects what you did, what happened at each step? Ideally, paste it.
If it’s all gone and you need another run, improving the logging level to at least Information would help.
GUI Commandline can do that with console-log-level. If you prefer, you can also edit the job configuration.
log-file=<path> and log-file-log-level=Information are a good starter, but Verbose and Profiling get more detail.
This will save you from having to run everything from GUI Commandline, because it will log either way.
An option for a fast peek at what’s going on is server log at About → Show log → Live → (pick a level).
This is reverse-chronological, so if you’re lucky there’s enough information at the end (top) to help out.
Sometimes one needs to click on an error to expand it though. So three ways to get some more detail.
Right now, there’s not even an error message quoted, and without any data at all, there’s little I can do…
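As a sketch, those two options look like this (the log path is just an example; use any writable location). They can go on the end of a GUI Commandline run, or into the job’s Advanced options so every run logs:

  --log-file=C:\Duplicati\duplicati.log
  --log-file-log-level=Information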

Sorry for my sudden silence. I was already falling behind on my workload, trying to solve this problem and others, and then one of my colleagues got COVID, so I must shoulder part of this colleague’s workload too. I’m on a serious time crunch!


filesystem corruption can happen if disconnect happens while drive is active.

Yeah. I’m not surprised a file got corrupted; I just don’t understand why one corrupted file made the 5,000+ other files useless. At the very least, this corrupted file shouldn’t have prevented further backups. Yet it did, and in the end, I had to delete everything and make another backup from scratch (which took a couple of days and nights, because of the upload time).


You no longer need to use Safely Remove Hardware when removing a USB drive on Windows 10 was a claim once made, but as you can see it received some skepticism.

Since “Safely Remove Hardware and Eject Media” is still part of Windows 10, I’ll keep using it.


I don’t think I posted one. I did post the output from an unposted command, and output from a test script.

Sorry for not being clear, I was talking about this:


Linux and Windows both support GUI through web browser, GUI command line, and OS command line.

Ah, I see, I shouldn’t have used Run, I should have used Command Prompt. It’s been a while since I used it, and I was so tired it felt like I had to physically push through a mushy wall just to get a thought out, and so … I didn’t even remember about Command Prompt.


Using the Command line tools from within the Graphical User Interface may be what you did to delete.

Ah, yes, sorry for forcing you to make sense of what I said. I usually express myself more clearly, but my mind hasn’t been clear at all lately.


Either way, there’s usually something said, which you can retype (ouch), copy-and-paste, or screenshot.

A written description of what you do (step-by-step) and what errors occur is a helpful place to start from.

Noted. I’ll remember – scratch that, I won’t – I’ll save your answer and a link to this thread for next time, hoping there won’t be a next time.


I think you’re doing what most users do (nothing wrong with that), which is GUI, and GUI Commandline when you have to do something GUI has no button for.

Correct once again.


Thinking more, the paste you previously posted clinches it. “Running commandline entry” means you’re using command line (in GUI).

Again, sorry for forcing you to reconstruct what I did from scattered clues. I should have taken screenshots. That’s what I usually do, but this time … Well, this time, I didn’t even think about it. I didn’t think very much at all, I realize, even though I did spend a lot of time trying to solve the issue. (When I should have been sleeping, which is part of the problem.)


What I wanted was the bug report before database deletion – if old database is around you can put it back. I asked for rename. Did you delete?

Yes. -_-"


log-file-log-level=Information is a good starter, but Verbose and Profiling get detail.

OK, done:

I suppose there’s a reason why Warning is the default, not Verbose or Profiling. What’s the reason?


An option for a fast peek at what’s going on is server log at About → Show log → Live → (pick a level).

None of the Live messages go back far enough (all are from today). Two of the Stored messages are from four days ago:

Sorry to hear, and I’m sure you didn’t need this new mystery. But at least your fresh start bypassed it.

If you ever run out of things to do, you could try breaking a test backup so you can get a nice look at it.
That may still put your primary backup at risk. I certainly don’t understand how the pCloud cache runs.

I didn’t design it, but Verbose is quite verbose, and Profiling is absurdly so. You’d see the low-level detail (intelligible only to an expert), but all the progress clues helpful for an ordinary user would scroll far away.

The issue with the database being in use might be caused by the auto-cleanup option. It’s best to repair manually.
Another possibility (suggested by the URL) is a GUI request to delete the database while it’s in use.

I did two backups — one on my local hard drive, the other on pCloud — then restored both as a test. Based on file and size count, the local backup restored everything, whereas the online backup didn’t restore some browser and email-client files.

Both restores (even the local one) warned that they couldn’t patch some browser backup files because those files were in use, which is weird because they’re Duplicati backup files; only Duplicati should have any use for them. So it’s probably more a question of the type of file being backed up.

Anyway. It’ll have to do. I need to focus on my work; I haven’t managed to catch up over the weekend. :sweat:

Again, thank you for your help.