Backup job fails with exclude filters set


#1

I set up two backup jobs of a few GB each and ran them successfully. Then I ventured into the big one: a ~14GB job with 4 exclude filters set (MP3, WAV, FLAC, MP4). The destination is on the same cloud drive as the others.
This job repeatedly failed with the error message “345 files missing”. Neither “Repair” nor “Restore (delete and repair)” nor deleting the job altogether (including the local DB) and setting it up from scratch helped. The repair actions got stuck at ~10% (judging by the progress bar), and I aborted them after waiting some 60 minutes. BTW, the number of to-be-excluded files in the source tree is ~1300, so not related to the 345 reported (I was just curious to see if there was any relation).
I tried a (tiny little) test job with the same exclude filters, which ran OK.
I appreciate any hints for troubleshooting.


#2

If you look at the logs you should find a list of specific files it says are missing.

My guess is they’ll be destination files that the local database thinks should be found on your cloud drive but aren’t there.

Or are they… Recently one or two providers seem to have made changes causing the list of files returned to Duplicati to be incorrect (partial or duplicated content).

Once you’ve found a log entry of at least one specific file declared missing, try connecting to your destination outside of Support and see if the reported missing file is there or not.

If it’s there, then it’s likely a destination communication error. If NOT there, then it could be actual missing (or moved / renamed) files / paths, or possibly a job configuration issue (such as a changed destination path).
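If your cloud drive is mounted locally, checking for one reported-missing file can be scripted. This is just a sketch: the folder path and file name below are made-up placeholders, so substitute the real mount point and the exact name from the Duplicati log.

```python
import os

def check_remote_file(destination: str, filename: str) -> bool:
    """Return True if the file Duplicati reports as missing is actually present."""
    return os.path.exists(os.path.join(destination, filename))

# Hypothetical values - replace with your real mount point and the
# exact file name taken from the Duplicati error log.
present = check_remote_file("/mnt/clouddrive/backup-folder",
                            "duplicati-b0000.dblock.zip.aes")
print("found on destination" if present else "really missing")
```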


#3

Thanks for the quick reply. Unfortunately I can’t read anything useful out of the error log (BTW the system language is German, so “bei” is “at”):

Duplicati.Library.Interface.UserInformationException: Found 344 files that are missing from the remote storage, please run repair 
  bei Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList(BackendManager backend, Options options, LocalDatabase database, IBackendWriter log, String protectedfile) 
  bei Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
  bei Duplicati.Library.Main.Operation.BackupHandler.Run(String[] sources, IFilter filter) 
  bei Duplicati.Library.Main.Controller.<>c__DisplayClass16_0.<Backup>b__0(BackupResults result)
  bei Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
  bei Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
  bei Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)

Can you help me with this?

What do you mean by “connecting to your destination outside of Support…”? On the destination drive I see only the encrypted zip files in 50MB chunks. How could I possibly find out which (source) files are missing from those chunks?


#4

Oops - “Support” should have been “Duplicati”: stupid auto-correct mobile device keyboard!!! Sorry about that. :blush:

What Duplicati is complaining about as missing aren’t source files, but the zip files themselves. My guess is that if you count the ones on the destination, there will be right around 344 of them.

In fact, there are probably 344 individual messages logged in Duplicati listing the individual files it says it can’t find.

If you can confirm that at least one of those individually reported missing files actually exists on the destination, then most likely Duplicati is not hitting the right destination folder.

If you can NOT find any of the individually reported missing files on the destination, then either you’re manually looking in the wrong destination or the files really are missing.


#5

My guess is if you count the ones on the destination there will be right around 344 of them.

Spot on – I have a total of 345 files (172 50MB chunks + the same number of dindex files + 1 dlist file) in the destination folder. However, I still can’t make sense of the error message (see previous post).
Before starting over with a new version of the job, I deleted the destination folder (actually: renamed it and set up a new folder with the name specified in the job), and the very first run placed those 345 files into the (newly created) destination folder before giving up. So it DID indeed write those 345 files – but then aborted with the message that 345 files are missing…
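The arithmetic matches (172 dblock chunks + 172 dindex + 1 dlist = 345). A quick way to tally a destination folder by Duplicati file type, assuming the cloud drive is mounted locally (the folder path would be hypothetical, substitute your own):

```python
import os
from collections import Counter

def count_by_type(folder: str) -> Counter:
    """Count destination files by Duplicati type (dblock / dindex / dlist)."""
    counts = Counter()
    for name in os.listdir(folder):
        for kind in ("dblock", "dindex", "dlist"):
            if kind in name:
                counts[kind] += 1
                break
    return counts

# 172 dblock + 172 dindex + 1 dlist = 345, matching the error count:
print(172 + 172 + 1)  # → 345

# Example (hypothetical mount point):
# counts = count_by_type("/mnt/clouddrive/H(MZ)")
# print(counts, "total:", sum(counts.values()))
```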


#6

OK. Bear with me while I try to make sure I’ve got this right…

  1. You had 2 jobs going to the same destination (I assume into different folders) that worked, but they were smaller (test?) jobs
  2. You set up a 3rd big job to the same destination (into a 3rd folder?) and it ran (at least partially) once then started reporting “345 files missing” errors
  3. You tried database “Repair” and “Restore (delete and repair)”, but they both failed (as expected, since the destination was likely missing all its files)
  4. You deleted the job and created it from scratch, trying to go into the same destination folder, but it ALSO ran (at least partially) once and then started reporting “345 files missing” errors

Is all that correct?


#7

Sorry for the late reply, I ran some extensive tests in the meantime.
re 1: yes, and different folders
re 2: yes, same cloud share, but different folder. It ran until the error message about missing files (which were actually there!)
re 3: yes
re 4: in the meantime I tried to approach the big job with several smaller ones, which all ran OK. For example: a source of 10797 files (~33GB) with filters excluding (by file type) most of them (10088 files, ~31GB) – this ran OK.
However, I then set up a new job with a new name and all (after deleting the previous one: config + local DB + remote data) of a larger size (17395 files, 79.56GB – excluding 11134 files, 73.82GB), and it failed again, complaining about the exact number of files missing that it had just created in the correct destination folder (221 files this time).
Repair has been hanging for a long time with no progress, so I expect this will also not terminate correctly.
On a side note: the name of my destination folder is “H(MZ)”, and I wondered if the bracket characters were the culprit, but one of my smaller tests ran OK with a similar destination folder: “T(TZ)”.
BTW I can’t do a test without exclude filters, as this would exceed my available cloud space by far…
Along the way of my tests I ran into some curiosities, but I will report these in separate posts.


#8

No need to apologize for response times - we’ve all got real lives to attend to as well as our computer ones. :wink:

So all your small tests worked but the big job failed. I’m not sure why that would be; there are jobs out there with more files and greater size than yours that are working just fine.

If you have the time, would you be able to create a new job as if it were going to be your “big one” but give it filters so it’s only a small one, then, if that runs OK, remove one filter per run until you get errors? That might help us pin down where the problem is. Plus, you could put the last filter back in and the job should run again, so you’ll at least be backing up some stuff while we figure out the rest.

It’s possible there’s something odd like a recursive folder somewhere that’s causing issues…
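The remove-one-filter-per-run idea could be sketched like this (the filter strings are made-up examples, not your job’s real filters; in practice you’d edit the filter list in the job UI between runs):

```python
# Hypothetical exclude filters - substitute the job's real ones.
filters = ["*.mp3", "*.wav", "*.flac", "*.mp4"]

def filter_sets(all_filters):
    """Yield successively smaller filter sets, dropping one filter per run."""
    for n in range(len(all_filters), -1, -1):
        yield all_filters[:n]

# Each iteration is one manual backup run; stop at the first run that errors -
# the last-removed filter points at the files causing the problem.
for run, active in enumerate(filter_sets(filters), start=1):
    print(f"run {run}: exclude = {active if active else 'no filters'}")
```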