“No filelist found on the remote destination”

Hey All,

I’m having an interesting issue. I have a fresh Debian 9 server with Backblaze B2 as a backup destination. Things were going along smoothly until I needed to reboot the server for kernel updates. When I tried to resume the backup, I got the error “No filelist found on the remote destination”. Alright, did some reading and attempted a database repair. That didn’t work and I still got the message. I did a database delete and then a verify. It said that there were 3115 files on the destination and that I would need to run a repair to update the database. Cool, not a problem. Afterward it got to a point where I ended up with the same filelist error.

So far I’m really liking this setup, but I would really like to resume my backup and get past this error. Can anyone offer any advice?

Sorry for missing this earlier - are you still having the issue?

Do you recall what version of Duplicati you were using and whether or not the initial backup had completed?

Normally continuing a backup like that should be no problem but the “No filelists found” error means Duplicati couldn’t find any dlist files at the destination.

This could be due to many possible reasons, including:

  • bad response from B2 when Duplicati asked for a file list

    [Can you check the job -> Show log -> Remote (tab) to see if any dlist files were listed? Note that if you rebuilt the database then it likely wiped that log]

  • bad parsing of response by Duplicati

  • files ACTUALLY missing (or maybe never created if initial backup incomplete?)

    [Can you find any files like duplicati-20171030T180000Z.dlist.7z.aes on B2?]

  • Duplicati looking in the wrong folder (well, different folder than it was using before)
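If you have a listing of the destination (e.g. pasted from the B2 web UI or a saved log), a quick sketch like the following can check it for dlist files. This is just an illustration, not a Duplicati tool; the naming pattern is inferred from the file names seen in this thread (duplicati-20171030T180000Z.dlist.7z.aes), and the example listing is hypothetical:

```python
import re

# Pattern inferred from names like duplicati-20171030T180000Z.dlist.7z.aes;
# the extension varies with the compression and encryption settings.
DLIST_RE = re.compile(r"^duplicati-\d{8}T\d{6}Z\.dlist\.")

def find_dlists(filenames):
    """Return the subset of remote file names that look like dlist files."""
    return [name for name in filenames if DLIST_RE.match(name)]

# Hypothetical listing, as you might paste it from the B2 web UI:
listing = [
    "duplicati-20171030T180000Z.dlist.7z.aes",
    "duplicati-b123...dblock.zip.aes",   # placeholder dblock name
    "duplicati-i456...dindex.zip.aes",   # placeholder dindex name
]
print(find_dlists(listing))
```

If the result is empty, the “No filelists found” error is exactly what you would expect Duplicati to report.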

Tell me if I understand correctly: the initial backup cannot be resumed? If it fails, I get this and more errors, with no fix.

I have a similar issue and I am not able to recreate the database.

The error remains. It was my first backup and it got interrupted.

Is there no way to recover?

Hello @Rupesh and welcome to the forum!

Could you give more details please? How did the backup get interrupted, and did the error appear on the next backup?

Please mention any other steps that may have been taken. Ordinarily, just backing up should continue…

Does your remote destination have some dblock and dindex files? I’d expect no dlist, given the complaint.

Does “I am not able to recreate database” mean with the Recreate button? If so, how did it come to that?


I faced this same issue today.

Duplicati version:
OS: Ubuntu 18.04
Remote service: B2 storage

Steps I took before this issue occurred:

  1. Set up and initiated the first backup.
  2. After ~3.2 GB, paused and then cancelled the backup.
  3. Added one more filter to exclude a directory.
  4. Tried to restart the backup, but it failed.
  5. Tried to reset and recreate the local db.
  6. Tried to delete the local db.
  7. Tried to start the backup again and got this error.

I had to delete all 3.2 GB from B2 storage, then delete and recreate the backup config to start over.

Lots of “Tried” there. Any notes on which step got this failure, and optionally how the other steps failed?

EDIT: Although if I take it exactly as written, “this error” only happened at step 7, or did it happen earlier?


and I don’t see why this is necessary. Are you already running, or are you saying you think you need this?

There might also be a way to avoid re-upload, but it’s ugly and experimental. I just tested it earlier today…

@ts678 Thanks for the quick reply.

Tried to reset and recreate the local db

The recreate of the local db failed, but I don’t remember the exact error message.

Tried to delete the local db

This was successful (I could tell because the delete button became disabled).

Tried to start the backup again and got this error.

This is the exact message I received at this point: “No filelist found on the remote destination”

Yeah, maybe deleting and recreating the backup config was unnecessary.

That makes sense, because the filelist isn’t uploaded until the backup is done, as it contains the results of the backup. The file list information was in the database, except that got deleted in step 6, leaving no clue about what files got processed so far. What you might have is some of their blocks (in dblock files) and the dblock index (dindex) files.

How the backup process works gets into this more, and B2 web UI can show what you actually uploaded.

Although B2 has free uploads, so you might prefer taking advantage of that, what I noticed is that Recreate appears able to put dblock and dindex information into the database, even if it’s seemingly not in any filelists. What this achieves is avoiding re-upload of the same information when the same files are backed up again in another attempt: Duplicati won’t upload blocks of data that it records as already on the remote.
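The upload avoidance above can be sketched roughly like this. This is a simplified illustration of hash-based block deduplication, not Duplicati’s actual code; the 100 KB block size matches Duplicati’s default --blocksize, and SHA-256 is assumed as the block hash:

```python
import hashlib

BLOCK_SIZE = 100 * 1024  # Duplicati's default --blocksize is 100 KB

def blocks_to_upload(data: bytes, known_hashes: set) -> list:
    """Split data into fixed-size blocks; return only the blocks whose
    hash is not already recorded (i.e. not already at the destination)."""
    new_blocks = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        if h not in known_hashes:
            known_hashes.add(h)
            new_blocks.append(block)
    return new_blocks

# Backing up the same content twice: the second pass uploads nothing,
# because every block hash is already in the database.
known = set()
first = blocks_to_upload(b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE, known)
second = blocks_to_upload(b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE, known)
print(len(first), len(second))  # 2 new blocks, then 0
```

So if Recreate manages to register the existing dblock/dindex contents in the database, re-running the backup only uploads blocks that weren’t there before.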

The trick is to get Recreate to pull the dblock and dindex info in (it may need a complete download) by manually uploading a dummy dlist taken from some other backup, done specifically to make this starter dlist. You don’t care that Recreate doesn’t like it, provided it runs and then lets you continue the backup as desired.

If you like, I can try to test this out a little more and supply steps, but B2 download charges might kick in…

Things like pause, cancel, and stop have some issues. I don’t recall all the specifics, but a repair effort is:

Fix pausing and stopping after upload #3712

It does simplify things a bit because it gives you the option of deleting the remote files. On the other hand it complicates things because you have to take it up on its offer to export the job, then import it back in again. Probably the best path though unless you’re in the ill-advised habit of deleting the backup parts by hand. :wink:

The easy way to avoid no-filelists-at-all is to start with a small backup, perhaps covering the most important files. After getting at least one backup completed, you won’t be subject to Recreate complaining that it can’t find any filelists.

Hi, I had a similar problem. All of my backups to AWS S3 were working fine except for one that kept failing with “No filelist found on the remote destination”. Here is what I think happened and how I managed to resolve the problem:

  1. I started the backup. After it finished, the *.dlist.zip.aes file was missing at the destination. I don’t remember how I got into this state, but it is possible that I tried to pause or stop the backup at some point and restarted it later.
  2. Based on advice in this thread, I created a dummy backup with a single file, and after it finished, I copied the dlist file from the dummy backup to the failing backup.
  3. I tried repair on the backup database. This didn’t help; I got the “Detected non-empty blocksets with no associated blocks” error.
  4. Recreated the database.
  5. Ran the backup. It got stuck at the end - the progress bar was showing two files remaining, Duplicati was consuming one CPU core, but I stopped the backup after three days of no visible progress.
  6. Ran the backup again. This time it succeeded, yay :).

OS: Windows 7
Duplicati version: initial backup created with, finished with

I had the same problem: a large initial backup kept failing, and then the database could not be rebuilt. Following the previous remarks, I deleted the database from the web interface and changed the destination folder on Google Drive to a new folder, then selected only a small number of files to get a fast and complete first backup. This adds the required dlist file to Google Drive and should facilitate rebuilds in the future. After the first backup, edit the config and add back all the remaining files for a full backup.


Same issue here. Duplicati on Proxmox 7.1 (Debian Bullseye). The remote destination is a Google Drive Teamdrive.

I followed the steps, but at step 3 I still get “No filelists found on the remote destination”, even though there is now a duplicati-20211124T232904Z.dlist.zip.aes among all the dindex and dblock files. Recreating the database throws the same error.

is probably the problem. Duplicati OAuth Handler can get you a new AuthID that will let Duplicati see your copied file, which I assume was copied by you, not Duplicati. Click on the file and look at its Created by.


which means that the creator can see it even with a (safe) limited login. Google plans to kill off full access.
Probably safest to use it only until you get your database repair done, then switch back to the limited login.
If Duplicati then complains the file has vanished, delete it manually, and a Duplicati Repair should replace it.

Alternatively, you can download your dummy dlist and upload it using a Duplicati tool and the usual AuthID.
Duplicati.CommandLine.BackendTool.exe can put the file to the target folder URL from Export As Command-line.

Thanks for the tips. The button for full access isn’t available for me, but after some trouble I managed to put the file on the drive using the command-line tool.

After that, Duplicati told me to run list-broken-files and purge-broken-files, which I did, and now the upload is running again.

However, the problem is that the backup seems to have started from the beginning anyway. Duplicati says it has 167 GB to upload, which is the size of the whole directory to back up. So the whole operation does not seem to have any advantage over deleting the backup and starting a new one.

Edit: There are still 63 GB on the drive, so it’s not like purge-broken-files just deleted everything.

You followed the Duplicati OAuth Handler link above? Did you see a limited login? I don’t think it’s browser-dependent, but I just tried three browsers and they all show full access. What are you seeing, and in what browser?

What (if anything) happened in the gap above, leading up to messages? Did you get a database created?

Are you talking about the files and sizes in the “to go” status? That’s not an upload value; it’s a processing value. The idea is to make use of the previously uploaded blocks by making Duplicati aware of destination blocks, recreating its database from the information (dindex files) that tracks which blocks are already at the destination.

Duplicati still has to process all the source files to see if their blocks are already backed up. If they’re in known dblock files, then Duplicati just references those, and doesn’t need to upload the same data again.

The other reason Duplicati can’t tell you that it has 167 GB to upload is that it can’t see that far ahead. The default remote volume size is 50 MB, and by default it can queue 4 volumes. When the queue is full, it stops further preparation of upload volumes, so source files aren’t examined to know which blocks need to be uploaded.
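As a rough back-of-the-envelope check, with the defaults mentioned above (50 MB volumes, a queue of 4), only about 200 MB is ever staged ahead of the uploads in flight, which is why the counter can’t know about the whole 167 GB up front:

```python
volume_mb = 50    # default remote volume size (50 MB), per the post above
queue_depth = 4   # default number of volumes that can be queued

# Maximum data prepared ahead of what has actually been uploaded:
staged_mb = volume_mb * queue_depth
print(f"At most ~{staged_mb} MB of upload volumes is staged at any time")
```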

Are you truly getting uploads? You can watch About → Show log → Live → Information, or see destination.

Sorry, my bad. I used the link from the Duplicati web interface. I got the button on your link.

I’m not exactly sure if I clicked repair database or recreate database; I believe it was recreate. But either way, after clicking that I got the message about listing and purging broken files.


That sounds great. Also, I just happened to see that the “to go” file size dropped by 50 MB a couple of times, as if Duplicati skipped those volumes.

So I just checked the finished backup, and the size of the files on Google Drive is actually what it is supposed to be. So Duplicati indeed skipped the already-uploaded volumes.


Sorting the Google Drive files by their Last modified date might be interesting, if you care to see how far it got on the first interrupted backup and how much it had left. I’ll assume it finished this time and did a dlist.

This may be totally unrelated, but I also ran into the “No filelists found on the remote destination” red error. I don’t care about it myself, but thought it might help, so I tested around for quite a bit to see if I could reproduce it. It turned out to be beyond me to reproduce it reliably.

I was playing around with a test backup. I already had some zip.aes files on the remote storage. Apparently adding ‘compression-module=7z’ and some new source files or source sets caused the issue to occur sometimes, maybe only if I killed Duplicati during the backup.

This is what sometimes reproduces the issue:

  • Create a small regular zip backup, and run a backup so we create a few .dblock.zip.aes files on the backend.
  • Change compression-module to 7z.
  • Add another source set, one that takes some time to finish.
  • Run it, and while it’s running, kill Duplicati (I restart the Docker container).
  • Run again; now the error appears.

I have reproduced it 2 or 3 times, but not reliably. I was ‘sure’ I had nailed it in a last attempt, but instead of the error I actually got some 7z.aes files on the backend; that was a first for me.

Once or twice the problem appeared after hard-restarting Docker while the backup was running.

But all in all, it was quite clear it was only a problem while I had 7z enabled.

I found this in the log.

2022-01-05 15:05:33 +01 - [Warning-Duplicati.Library.Main.Controller-UnsupportedOption]: The supplied option --zip-compression-zip64 is not supported and will be ignored
2022-01-05 15:05:33 +01 - [Warning-Duplicati.Library.Main.Controller-7zModuleHasIssues]: The 7z compression module has known issues and should only be used for experimental purposes
2022-01-05 15:07:59 +01 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.IO.FileNotFoundException: The given file is not part of this archive
File name: 'fileset'
  at Duplicati.Library.Compression.SevenZipCompression.OpenRead (System.String file) [0x00063] in <f2c90e934a6a4aeaa4ff8ddaa332777d>:0 
  at Duplicati.Library.Main.Volumes.VolumeReaderBase.ReadFileset () [0x00000] in <e60bc008dd1b454d861cfacbdd3760b9>:0 
  at Duplicati.Library.Main.Volumes.VolumeReaderBase..ctor (System.String compressor, System.String file, Duplicati.Library.Main.Options options) [0x0001b] in <e60bc008dd1b454d861cfacbdd3760b9>:0 
  at Duplicati.Library.Main.Volumes.BlockVolumeReader..ctor (System.String compressor, System.String file, Duplicati.Library.Main.Options options) [0x00010] in <e60bc008dd1b454d861cfacbdd3760b9>:0 
  at Duplicati.Library.Main.Operation.Backup.SpillCollectorProcess+<>c__DisplayClass0_0.<Run>b__0 (<>f__AnonymousType11`2[<Input>j__TPar,<Output>j__TPar] self) [0x0023c] in <e60bc008dd1b454d861cfacbdd3760b9>:0 
  at CoCoL.AutomationExtensions.RunTask[T] (T channels, System.Func`2[T,TResult] method, System.Boolean catchRetiredExceptions) [0x000d5] in <9a758ff4db6c48d6b3d4d0e5c2adf6d1>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.RunMainOperation (System.Collections.Generic.IEnumerable`1[T] sources, Duplicati.Library.Snapshots.ISnapshotService snapshot, Duplicati.Library.Snapshots.UsnJournalService journalService, Duplicati.Library.Main.Operation.Backup.BackupDatabase database, Duplicati.Library.Main.Operation.Backup.BackupStatsCollector stats, Duplicati.Library.Main.Options options, Duplicati.Library.Utility.IFilter sourcefilter, Duplicati.Library.Utility.IFilter filter, Duplicati.Library.Main.BackupResults result, Duplicati.Library.Main.Operation.Common.ITaskReader taskreader, System.Int64 filesetid, System.Int64 lastfilesetid, System.Threading.CancellationToken token) [0x0035f] in <e60bc008dd1b454d861cfacbdd3760b9>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x00a1c] in <e60bc008dd1b454d861cfacbdd3760b9>:0