Unable to Reset and Start Backup Over Since Upgrade to v2.1.0.2

I have been a long-term Duplicati user and have a large (2.5 TB) backup which is sent to three targets: two remote hosts over SSH and a local external HDD. Over the years I've had many issues with Duplicati in one way or another, the biggest and most frequent being corrupted local databases, which easily grow to 2-3 GB in size (even with a large block size). On some occasions I repair the local DB, which does work, but as you can imagine it is slow. Because I have three targets, when I hit issues I generally prefer to reset a backup rather than repair the DB. To do this I will:

  1. Access the backup files and delete all of them.

  2. Use the Duplicati GUI to go to the “Advanced”->“Database” page and choose to “Delete” the database.

  3. Start the backup manually, or wait for it to run automatically, and then Duplicati takes the combination of “no files at target” and “no database” to mean that it should start the backup fresh and off it goes. Great.

Last week I upgraded to v2.1.0.2 from whatever the previous beta version was (am not on canary) and started to run into some issues with one of my targets. Now I suspect that there is a faulty HDD at play, which is why I'm not too worried about the error messages (invalid HMAC, don't trust content) and am not going to explore that issue at this time. What I want to do is reset the backup, but in doing so I have found that the process above no longer works!

This is what I have done:

  1. Delete all of the files at the target.

  2. Delete the local DB.

  3. Try to run the backup. The software reports “No filelists found on the remote destination!” and for some reason still thinks it knows there are 63 versions in the backup. How can that be? The version information is supposed to be in the database, isn’t it?

To try and fix this I have attempted the following, checking the status after each step:

  1. Ensured that not only the database file but also all of the backup files at the target were deleted.

  2. Restarted Duplicati.

  3. Exported the configuration, deleted the configuration (ensuring the tick boxes for deleting the files and the database are checked), and then imported the configuration fresh.

Yet despite doing all of that the issue persists. After exporting and importing the GUI no longer claims that any versions are available, but still fails to run a backup, reporting “No filelists found on the remote destination”.

Now what I don’t want to do is completely reset the software and lose my other two backups in the process.

I can post log files here but it seems to me that this might be a situation that has simply not come up during testing, especially since few people wish to willingly delete a backup and start over. For those reasons I doubt the log files will be useful, but let me know if you disagree.

For now I have just disabled this backup by turning off the schedule, but it’s annoying to be down a target. In all other regards, and despite the DB corruption issues, this is fantastic software and I am more than willing to persevere. Thank you to all the contributors.

I know you’ve (unfortunately) had a procedure that worked, but I’d like to hear the specifics.

Sounds like direct access outside of Duplicati. I think someone once asked for an easy way to do this, however it is dangerous enough that it would need guarding. Also, it "shouldn't" be needed often.

Yes, and it still works fine for me (and I don't think anyone else has reported this), so no help there…

Presumably this has to be direct access.

How? Database screen or direct access? The Database screen is probably more likely to hit the right file.

There's more than one database. I think Duplicati-server.sqlite holds the home screen statistics.
People who back up a GUI job with Duplicati.CommandLine.exe complain that those statistics don't update.
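If you want to poke at that database yourself, something like the below works on a Linux user install (the path is an assumption; a Windows or service install keeps it elsewhere):

#!/bin/bash
# Sketch: inspect the server database via a copy, so the running
# server is never touched. Path assumes a Linux user install.
cp "$HOME/.config/Duplicati/Duplicati-server.sqlite" /tmp/server-copy.sqlite
sqlite3 /tmp/server-copy.sqlite ".tables"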

Import configuration also gets into this:

The option to “Import metadata” will create the new backup configuration and restore the statistics, including backup size, number of versions, etc. from the data in the file. If not checked, these will not be filled, and will be updated when the first backup is executed.

Where? If home screen, that’s explained. If elsewhere, e.g. job log, Restore, etc., very strange.

but above that is a different error if no files at all were found. This says it saw some Duplicati filenames, but no dlist files. Please check Destination files just before and just after the error.
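If the target in question is one of the SFTP ones, a quick way to eyeball this from a shell is something like the below (host and path are placeholders):

#!/bin/bash
# Sketch: list whatever Duplicati files remain at an SFTP/SSH destination.
# Host and path are placeholders for the real target.
ssh user@host.target.com 'ls -l /path/to/backup/ | grep duplicati-'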

Please also check Settings for Default options that may be adding unexpected extra options.

Is this TrayIcon, a Windows service, or something else? Please check Task Manager as well.
Quitting the TrayIcon has always been able to make it vanish before its work finished, and 2.1 can be slow to shut down even when idle.

You want to be very sure everything’s down and not (for example) still uploading backup files.
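On Linux, a quick way to check for leftover processes is something like:

#!/bin/bash
# Sketch: confirm no Duplicati processes are still running before
# deleting databases or destination files.
ps aux | grep -i '[d]uplicati' || echo "no Duplicati processes found"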

Presumably you took default, and didn’t check the Import metadata option for home screen.

Which one did you reset? You also mentioned a faulty HDD somewhere. Are you on that one?

Likely, but since I do lots of support and testing I reset backups quite a lot, though with far smaller backups.
I'm certainly not going to advise testing whether your other two do this too, but how about a new one?

See if you can get a very simple very small backup to show the problem. If so, check logs for it.
About → Show log → Live → Information is a start, Verbose is higher, and there are higher levels yet.
You can also set up a log file either for the TrayIcon (or whatever), or as options on the test job.
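If you go the per-job route, the two advanced options involved would be along these lines (path and level are just examples):

# Example advanced options for the test job (example path and level):
--log-file=/tmp/duplicati-test-job.log
--log-file-log-level=Verbose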

Might as well look at Job → Show log → General and Remote, and About → Show log → Stored; however, only the latter will survive deletion of the job database, since job logs are stored in the job database.

ts678, thank you for the questions. I need a little time, but I will reply with answers to all of them as soon as I can.

This is the major issue to me. Duplicati only relies on the local database, so if you delete that, there is nothing that could make it "invent" 63 versions. So somehow there is a database somewhere that has 63 versions recorded.

In the UI, can you copy the database path and make sure it is actually deleted when pressing “Delete” ?
Alternative is to edit the path and save the changes, this will make it use the new database name.
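From a shell that check is just something like the below (the path is a placeholder; paste the one shown on the Database screen):

#!/bin/bash
# Placeholder path - use the one copied from the Database screen.
DB="$HOME/.config/Duplicati/ABCDEFGHIJ.sqlite"
[ -e "$DB" ] && echo "database still exists" || echo "database is gone"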

My theory (still awaiting confirmation) is that they saw the home screen say 63 Versions.
That’s in the Duplicati-server.sqlite database, which didn’t get deleted with job DB.
Until I hear otherwise, I’m not worried about the 63. Reading the sequence below:

The run was almost certainly started from the home screen, so one might guess that the 63 versions was seen there too.

Ok, we will need more information from @sgeklor , but yes, the number displayed is data recorded from the last run. It is not updated if you delete the database (or run other commands), but it has no effect on the actual operation.

The error message "No filelists found on the remote destination" happens if you attempt to rebuild the database after you have deleted the remote files. It cannot build a database (and does not need to) if there are no filesets (.dlist files) on the remote destination.

Not exactly. As mentioned, a complete remote file deletion hits this earlier error in the checks:

Test result:

This is significant because it seems to be a narrow path that somehow gets the other message.

Yes, but this is still only in recreate. The OP mentions it happening during backup?

Yes, good catch. This means that there is at least one file in remote storage matching the expected filename format, either a .dindex or a .dblock. That will pass the first check, but since there are no .dlist files, it gives the "no filesets" error.
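One way to see that state directly is to count the destination files by type; a sketch for an SFTP/SSH target with placeholder host and path:

#!/bin/bash
# Sketch: count remaining destination files by type. Zero dlist with
# non-zero dindex/dblock matches the situation described above.
# Host and path are placeholders.
ssh user@host.target.com 'cd /path/to/backup &&
  for t in dlist dindex dblock; do
    printf "%s: %s\n" "$t" "$(ls | grep -c "\.$t\.")"
  done'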

Backup can get to Repair through --auto-cleanup in job or Settings default options.
Repair can get to Recreate through lack of database, and we heard delete was done.

I set up such a backup of a short file, deleted the database and the single dlist, and ran again:

About → Show log → Stored

Jan 16, 2025 8:31 AM: Failed while executing Backup "test 1" (id: 3)
Duplicati.Library.Interface.UserInformationException: No filelists found on the remote destination
   at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun(LocalDatabase dbparent, Boolean updating, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
   at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.Run(String path, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
   at Duplicati.Library.Main.Operation.RepairHandler.RunRepairLocal(IFilter filter)
   at Duplicati.Library.Main.Operation.RepairHandler.Run(IFilter filter)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(String backendurl, Options options, BackupResults result)
   at Duplicati.Library.Main.Operation.BackupHandler.RunAsync(String[] sources, IFilter filter)
   at Duplicati.Library.Utility.Utility.Await(Task task)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass22_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)

Job → Show log → Remote

Jan 16, 2025 8:31 AM: list
[
{"Name":"duplicati-b00357546c7dc48f3ac58cf93dc786460.dblock.zip","LastAccess":"2025-01-16T08:30:29.2416476-05:00","LastModification":"2025-01-16T08:30:29.2416476-05:00","Size":769,"IsFolder":false},
{"Name":"duplicati-i5fb2c573b5d14944ad394661da0001ee.dindex.zip","LastAccess":"2025-01-16T08:30:29.4726604-05:00","LastModification":"2025-01-16T08:30:29.4726604-05:00","Size":624,"IsFolder":false}
]

I have no idea if this fits this case, but above are a couple of ways that one can get clues.

Hi everyone, I am sorry for not replying sooner or supplying the requested information. I was at work out in the field.

Upon returning to the office last week I saw that there is a new version 2.1.0.3. I applied that update and now the backup will start as expected.

So, in summary:

For my use case direct access to both the backup files and the database is a given. It’s my computer, if I want to delete those files then I will.

Resetting a backup is a necessary evil. For my backup size (1.6 TB give or take) the whole system seems to break down about once a year or so. It has gotten better over the years. To reset the backup I wish to do two things: empty the target location of files and delete the database. There is no way to do this via the GUI so I have to improvise. I access the remote files and delete them directly. I then get the database name from the GUI, go to the Duplicati folder, and delete the database along with any backup copies of it. Note that my database files are regularly 3-4 GB each and there are usually 2-3 backup copies present. So that's > 10 GB per backup in the config, and since I run 3 configs that's easily 30 GB of primary storage gone due to Duplicati databases. I then go back to the GUI and tell Duplicati to "Delete" the database. I have no idea what that does in addition to deleting the file (which in my case is already deleted), but it seems to be a necessary step. I then click start backup and it should run afresh.
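For anyone curious, the shell side of that improvisation looks roughly like the below (host, path and database name are placeholders, and I only do this while Duplicati is idle and the other two backups are healthy):

#!/bin/bash
# Rough sketch of my manual reset - placeholders throughout.

# 1. Empty the target (one of my SSH targets in this example).
ssh user@host.target.com 'rm -f /path/to/backup/duplicati-*'

# 2. Delete the local job database (name copied from the Database screen)
#    along with any backup copies of it.
rm -f "$HOME/.config/Duplicati/ABCDEFGHIJ.sqlite"*

# 3. Back in the GUI: Database -> "Delete", then start the backup manually.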

I have no idea what went wrong on the previous version. But I can tell you that it is now working as expected.

I made some comments in my original post about an unreliable HDD. To be clear, this was an HDD being used as one of the backup destinations. No doubt the read errors would cause problems for Duplicati and this would result in database corruption. Again, it's not Duplicati's "fault" as such, but I do believe the program is quite brittle and would benefit from improvements in how it handles unexpected situations. For example, I still have "run before" scripts defined that perform a number of tests to verify that the destination exists and is reachable before starting Duplicati. Without those checks I would find that Duplicati would hose the database about once every 2-3 months. I wrote them some 4 years ago, so it could be that they aren't needed now, but still. By the way, I am trying to back up a laptop that travels the world and regularly has intermittent internet access, as well as "transient" external drives. The software needs to be able to handle this. In particular, the reliability of cancellation is abysmal - clicking "stop now" or "stop after current file" often does absolutely nothing.

Sorry, this was not supposed to turn into a rant, my apologies. Overall the software is great, and I acknowledge that I’m pushing it to the limit with the size and frequency of my backups.

I don’t see how this could cause database corruption?

If the remote storage is defective, Duplicati will refuse to use it.
This does not imply the database is corrupted, but is meant as a safeguard to avoid running backups only to discover later that they cannot be restored.

If the scripts are not sensitive, could you share them? This may help guessing what the problems you originally saw are.

Yes, this was severely broken, and I have fixed some of it in the canary builds.

This is not supposed to be near the limits; it should just chug along, so any information about failures is appreciated, as it helps us get closer to fixing it.

Thanks for replying, am more than happy to help.

On the first question, about database corruption: I must admit I don't have a positive correlation here. All I meant was that since I find Duplicati to be brittle, if there are any issues anywhere in the system, it will result in database "corruption". Note that all I mean by "corruption" is that the software starts to complain about the local database and the only way to resolve it is to attempt the repair (never works) or delete and rebuild. Because I have three targets, and the size of the backup is so large, what I will do is nuke a backup target (as explained in previous posts) rather than attempt a repair - it's quicker - but I only do this when I have two other valid backups still working.

Yes, more than happy to share. These are my "run before" scripts for remote SFTP targets and for local external HDD targets. Note that I have a commented-out first line which lets me easily disable a backup. I find this much easier than using the GUI to disable a backup, especially since unticking the "run as scheduled" box does disable the backup as desired, but when I return later and tick it again I find I have to configure the schedule all over again. So for that reason, my own disable flag in the "run before" script is simply easier.

I should have said, this is running on Linux Mint 21.3.

Run before for remote host (checks remote host is up, and the local computer has internet access, neither of which can be assured):


#!/bin/bash

# Disable the backup by returning an error right away
#exit 1

# Can we ping the device?
ping -c 2 host.target.com > /dev/null && exit 0

# No dice!
exit 1

Run before for local external HDD (checks external HDD is present, not guaranteed if away from desk with a laptop):


#!/bin/bash

# Disable the backup by returning an error right away
#exit 1

# Does the target path exist?
[ ! -d "/media/user/drive/path/to/storage" ] && exit 1

# All good
exit 0

That's great to hear about the cancellation, well done.

And great to hear that my backup is not at the "limits". I am basing that opinion on a lot of my earlier efforts some 5-6 years back. I was throwing a 2 TB backup at the software and, with the default block size, the local database was insanely large (20 GB or something). So I worked out early on that for a large backup you need a much larger block size, and then to tweak the volume size based on the reliability of the remote storage. Without that customisation the software just wouldn't work. I hope these days it's much better for users with regard to defaults. I also appreciate that the software can't really tweak the blocks on the fly, and it won't know the backup size until it does a first scan. There is no good solution here other than adding support for multiple block sizes into the database, and I appreciate that doing so is absolutely non-trivial. Sorry.
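For reference, the two knobs I mean are the deduplication block size and the remote volume size, set as advanced options on the job; roughly like the below (my values, not a recommendation):

# Example advanced options on a large backup (values are just what I use):
--blocksize=5MB       # deduplication block size; fixed after the first backup
--dblock-size=200MB   # remote volume size; can still be changed later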

There have been cases where the database could be inconsistent, and there are quite a few checks that validate that the database is sane. Working off an inconsistent database will very likely result in broken (or incomplete) backups.

The only time it should “complain” is if there is a discrepancy between the list of remote files in the local database and the list returned from the server. In this case, Duplicati can “repair” by removing extra remote files or recreate missing remote files. The recreate of remote files only works if the needed data is still present on the system, which it may not be.

If you get one of these error messages, could you post it and maybe we can figure out what is triggering the discrepancy?

We will soon look into recreate database speed, but in our testing, it currently takes ~10 minutes to rebuild the database from a 100GiB backup. Are your recreate speeds much slower?

Those scripts seem very benign. I think one of the errors you would get would be something like “Missing 1234 files on remote storage, please run repair”.

This happens when Duplicati cannot find the files on the remote storage, because it is not mounted. This should be a transient error, so it should just continue next time if the storage is there.

The same should happen if the host cannot be reached.

I can only see that these scripts would make the error message go away; they should not affect Duplicati's stability.

And naturally, this will not work with repair, as it would require you to have every piece of data in the entire backup set in your source.

Yes, we have both sped it up quite a lot and also increased the default block size to 1MiB.

The database is actually prepared for this, as it stores both the hash and size of each block. At some point we might start using dynamic block sizes, so the size of blocks changes according to the file size. But we have not prioritized this part yet.