I think I was running 2.0.8.1 beta. In any case, after I upgraded to 2.1.0.4_stable I’m receiving the following messages:
The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.
If I attempt a repair of the database, I get the following:
No files were found at the remote location, perhaps the target url is incorrect?
I am using AWS S3. I can browse and see all the files through Amazon’s website. The “Test Connection” button also works perfectly. I tried to downgrade, but that didn’t solve anything. I cannot restore files either.
What was the result of the downgrade? The same, or different? If you didn’t downgrade per the directions, it was likely different, probably complaining about database versions, because 2.0.8.1 doesn’t know about future database versions.
The “Test Connection” button usually just checks authentication and that file listing is possible. You could try a better test:
Look in Job → Show log → Remote. If you see a list entry, click it. If nothing is there, try inducing one with the Verify files button. If it’s not too offended by the database, it should list the remote files and compare them to the DB. One big mystery, however, is why it can’t see files any longer, at least according to the Repair result.
The “target url” is a way of expressing the Destination setup. “Export As Command-line” can get one properly quoted to use at a command prompt with the BackendTool, if you want to list that way.
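For example, the exported command ends up looking roughly like this (placeholders here instead of your real bucket, folder, and credentials; your export may include extra options):

.\Duplicati.CommandLine.BackendTool.exe LIST "s3s://<bucket>/<folder>/?auth-username=<key-id>&auth-password=<secret-key>"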
After what? Did you upgrade and then run a backup? Did something say to run a repair? Something ran one, and problems arose. If you saw them, what were they? About → Show log → Stored may have clues from the server database. If you actually deleted the local job database, the logs from there are gone.
If the job config somehow changed, but you know your S3 setup, you can try Direct restore from backup files to see how far that gets. If it can even show you restore dates, those come from the S3 files.
No, I didn’t follow those directions; I didn’t know they existed. I also thought there was a possibility the DB didn’t get updated in the first place. But it was the same result.
I received that message when I tried to back up after upgrading. I’ve now gone back to 2.1.0.4. When I run the backup I get:
The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.
So then when I run repair, I get:
No files were found at the remote location, perhaps the target url is incorrect?
If I go to About → Show log → Stored, I get this for the repair:
Repair log
Duplicati.Library.Interface.UserInformationException: No files were found at the remote location, perhaps the target url is incorrect?
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun(LocalDatabase dbparent, Boolean updating, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.Run(String path, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
at Duplicati.Library.Main.Operation.RepairHandler.RunRepairLocal(IFilter filter)
at Duplicati.Library.Main.Operation.RepairHandler.Run(IFilter filter)
at Duplicati.Library.Main.Controller.<>c__DisplayClass21_0.<Repair>b__0(RepairResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.RunAction[T](T result, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.Repair(IFilter filter)
at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)
For the previous backup attempt I get:
Backup log
Duplicati.Library.Interface.UserInformationException: The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.
at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(String backendurl, Options options, BackupResults result)
at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IFilter filter, CancellationToken token)
at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
at Duplicati.Library.Main.Operation.BackupHandler.Run(String[] sources, IFilter filter, CancellationToken token)
at Duplicati.Library.Main.Controller.<>c__DisplayClass17_0.<Backup>b__0(BackupResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)
I do see a list, but it is empty. If I do Verify files, the list is still empty.
When I then try Direct restore from backup files, I get this:
Direct restore log
Duplicati.Library.Interface.UserInformationException: No filesets found on remote target
at Duplicati.Library.Main.Operation.ListFilesHandler.Run(IEnumerable`1 filterstrings, IFilter compositefilter)
at Duplicati.Library.Main.Controller.<>c__DisplayClass24_0.<List>b__0(ListResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.RunAction[T](T result, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.List(IEnumerable`1 filterstrings, IFilter filter)
at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)
So, based off the URI exported from the command-line configuration, I ran the following, and it listed the remote files:
.\Duplicati.CommandLine.BackendTool.exe LIST "s3s://[redact]-duplicati//t/?s3-location-constraint=us-west-2&s3-storage-class=STANDARD&s3-client=aws&auth-username=[redact]&auth-password=[redact]"
That is super odd… The LIST call here calls the exact same method as Duplicati does when listing files. This includes the parsing of the URL, so if the URL you have on the commandline is the same as is reported in the UI (use copy-to-clipboard), then I don’t have a great guess.
Are you perhaps using a different prefix?
Usually, all Duplicati generated files will start with duplicati-, and if they do not, they are ignored.
Looking at the URL, I would think it works just fine, but I see a double // that could potentially mess something up. It should be sufficient to use a single slash, i.e. s3s://[redact]-duplicati/t/… instead of s3s://[redact]-duplicati//t/…
My best guess is that this is somehow related to the path being interpreted slightly differently.
The code that parses the URLs has not been changed between the two versions, though.
Avoiding a leading slash in the destination folder path would prevent what I assume happened in job creation here. This has caused some problems before.
A problem with having people remove it on an existing backup is that the destination might not find previously uploaded files whose object keys did have the leading slash, if that is considered a different key.
Yes, since Duplicati created the mess, I think we should trim leading slashes in the backend, so we support having either version in the filenames, and emit a warning if we need to trim the source URL’s leading slashes.
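Something along these lines, as a rough sketch of the idea (the helper names here are made up; it is not the actual change):

using System;

static class S3PathNormalizationSketch
{
    // Illustrative only: trim a stray leading slash from the configured prefix
    // and warn, so the setting no longer hides previously uploaded files.
    public static string NormalizePrefix(string prefix, Action<string> logWarning)
    {
        var trimmed = prefix.TrimStart('/');
        if (trimmed.Length != prefix.Length)
            logWarning($"Removed leading '/' from S3 prefix \"{prefix}\", using \"{trimmed}\"");
        return trimmed;
    }

    // Accept both "/t/duplicati-..." and "t/duplicati-..." keys when listing.
    public static string NormalizeRemoteName(string key) => key.TrimStart('/');
}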
But I don’t understand how it can work and then stop working?
Upgrading AWS SDK to latest #5654
That went from 3.3.110.7 to 3.7.405.9, so it may have picked up the 3.7.205.20 breaking change.
I’m not sure whether Duplicati’s use of the library falls into what got changed there, though.
It’s also conceivable that the slash change is broader than what got described.
Because those two share the same DB version, an easy first test if you have AWS S3 is switching between 2.0.9.109 and 2.0.9.110.
2.0.8.1 is actually on 3.3.104.43, but the DB version change makes flipping to it less convenient…
Some years ago, I found a tool (a CLI, I think, but I forget which one) that could show the keys without the slash interpretation that too many tools and web GUIs tend to do.
EDIT 2:
Another reservation about this idea is that the result above seems to say that the 2.1.0.4 CLI withstands a leading slash, but direct restore doesn’t.
That was more for the developers to attempt a repro. I already heard that you have AWS S3.
Getting a bit ahead of things: if you really want to downgrade, feel free, and that could firmly disprove this particular S3 library theory. A less disruptive and risky approach than running a Canary in production (if that bothers you) is to install from the .zip to a folder somewhere, stop your regular Duplicati, and start a TrayIcon from that folder for a Direct restore from backup files where you type in the S3 settings by hand. If that fails, the theory is invalid…
Obviously something changed somewhere since 2.0.8.1, and we’re kind of fishing for the spot.
The above describes the current state of confusion, IMO. We also aren’t quite sure what AWS has, in raw form. Looking again for an ideal tool didn’t find anything, but the post above got somewhere with the aws CLI. aws s3 ls has a --recursive option, and I wonder if it tries to find pseudo-folders to list, or just shows the entire bucket, which would be far easier. A next question is whether it can show a leading slash.
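One thing that might show the raw keys (I have not tried it against this bucket, so treat it as a suggestion) is the lower-level s3api listing, which returns the Key values as stored rather than a pseudo-folder view:

aws s3api list-objects-v2 --bucket <bucket> --query "Contents[].Key"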
Great find! I guess that could explain it. There is a small bug-fix handler in the S3 backend that deals with cases where filenames start with /, and a reference to Duplicati 1.x.
But in this case, it does not look like the filenames start with /, but rather with the correct duplicati- prefix.
In any case, I have made a PR that removes leading slashes from both the prefix and the filenames, if any should be present:
Reading through the issue again, I am not sure this will fix the issue though.
@zhackwyatt can you find one of the files on S3, get the entire object key, and post it here? Preferably a screenshot, if it does not reveal too much.
It should be something that starts with t/duplicati-. I am fishing for an explanation that could tell me why Duplicati says there are no files.
Alternatively, could you post one or two of the lines that are returned when you use BackendTool.exe LIST, so I can see if there is anything that could explain this?
The LIST command returns the results directly, but Duplicati will parse the filenames and omit anything that it does not like, and the results here suggest that something makes it reject the filenames.
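To illustrate the kind of mismatch I am fishing for (purely a sketch with made-up names, not the actual parsing code): if the listed keys carry a leading slash that the expected prefix does not, or the other way around, a strict check like this rejects every file:

using System;

static class FilenameFilterSketch
{
    // Illustrative only: a key of "/t/duplicati-..." fails a strict check
    // against the expected prefix "t/", so every file would be dropped.
    public static bool LooksLikeDuplicatiFile(string key, string prefix)
    {
        if (!key.StartsWith(prefix, StringComparison.Ordinal))
            return false;
        return key.Substring(prefix.Length).StartsWith("duplicati-", StringComparison.Ordinal);
    }
}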
I was only trying to figure out why downgrading is difficult; since we are trying to recreate my database anyway, I figured the database version was unimportant. But I don’t know all the details of how Duplicati works, so maybe what I just said doesn’t make sense.
Not downgrading the server database means the server will find the wrong version and so won’t start. Messages about this exist, but they are hard to find. Here’s the start of what Duplicati-crashlog.txt says:
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Exception: A serious error occurred in Duplicati: System.Exception: Failed to create, open or upgrade the database.
Error message:
The database has version 8 but the largest supported version is 6.
This is likely caused by upgrading to a newer version and then downgrading.
The old version of 6 is what the directions say to go to. The new value of 8 is in the release notes, explained as follows:
Old versions of Duplicati are unable to read database formats that did not exist at their time, so downgrading Duplicati means changing both the format and the version number back to the old.
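To make the failure mode concrete, the guard behaves roughly like this (an illustrative check, not Duplicati’s actual code):

using System;

static class DowngradeGuardSketch
{
    // Illustrative only: the kind of check behind the crashlog message above.
    public static void EnsureSupported(int databaseVersion, int largestSupportedVersion)
    {
        if (databaseVersion > largestSupportedVersion)
            throw new Exception(
                $"The database has version {databaseVersion} but the largest supported version is {largestSupportedVersion}.");
    }
}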
EDIT:
I don’t know if that answered your question. I view downgrading as probably difficult for a typical person. Technical abilities vary. There’s no custom Duplicati downgrading tool, only a procedure.
I still do not understand how this message can be there, when the LIST command returns the files. Based on the other comments, my guess would be that the url is somehow “sanitized” so the double slash is removed?
Edit: That did it, I was able to reproduce!
The problem here is the SecretManager, which converts the string to a Uri and then back to a string. This conversion loses the extra leading slash.
Since the secret manager is not part of the BackendTool, the problem is not there; it only shows up when running the backup. I have made a fix here.
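To show the effect, here is a toy round-trip (not the SecretManager code itself, just an illustration of how rebuilding a URL from its parsed path segments collapses the doubled slash):

using System;
using System.Linq;

class SlashRoundTripSketch
{
    // Toy parser: split "scheme://host" from the path, then rebuild the
    // path from its non-empty segments, which drops the empty "//" segment.
    static string RoundTrip(string url)
    {
        var schemeEnd = url.IndexOf("://", StringComparison.Ordinal) + 3;
        var firstSlash = url.IndexOf('/', schemeEnd);
        var head = url.Substring(0, firstSlash);           // "s3s://bucket"
        var path = url.Substring(firstSlash);               // "//t/"
        var segments = path.Split('/').Where(s => s.Length > 0);
        return head + "/" + string.Join("/", segments) + "/";
    }

    static void Main()
    {
        // The doubled slash that marked the extra leading "/" in the folder is gone.
        Console.WriteLine(RoundTrip("s3s://bucket//t/"));   // prints s3s://bucket/t/
    }
}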