My backup started over again for no reason

My backup to B2 completed a few days ago. All of a sudden, Duplicati started backing up all my files again.

This is my B2 bucket from the first backup. The size of the bucket looks correct, it hasn’t shrunk.

A few days ago a yellow warning message with ~3700 warnings popped up. I tried to check them but got this. I still can’t open the log as of now.

I can open the ‘live log from the server’. The log says that my old videos and photos have magically changed their size.

My Duplicati backup is in progress again.

Also, notice the 8 versions of the backed-up files. I only expect one version because my B2 bucket keeps only the latest version of any file.


  1. Why did the backup restart?
  2. What are the 8 versions of my files?
  3. Why did my files “change their size”? (They didn’t, actually.)

It looks like the metadata got lost for some reason, so Duplicati thinks all your files may be new or changed. It will reprocess those files, but it won’t re-upload any data to the back end (except for actual new/changed blocks of files).

The “8 versions” are Duplicati backup snapshots. Every time you run a backup, a new version is created. Versions are retained according to your retention settings. If you go to restore a file, you can pick which of these versions to restore from. The B2 version retention is different: keep it disabled (set to retain only the latest version of a file), because Duplicati does its own version retention.

I can’t say at the moment why the metadata changes were detected but at this point I would let Duplicati finish. Did you do a database repair/recreate recently?

metadata got lost for some reason
ok let’s observe for now, I’ll keep an eye on the metadata file

The “8 versions” are Duplicati backup snapshots.
ok sounds reasonable

let Duplicati finish
ok I agree, let’s see what happens.

Did you do a database repair/recreate recently?
No I didn’t touch anything. I was away from my PC for 4 days (travel).

An update
I got 300 warnings a second ago regarding files in my homedir. As you can see the whole backup has started over one more time.

The warnings point to pcloudDrive directory:

I have excluded the whole pcloudDrive directory from the backup (pCloud is Dropbox-like software which backs itself up). I hope Duplicati does not hiccup somewhere else and this backup will finish…


  1. Any idea why the backup restarted again?
  2. How can I monitor the health of the metadata file?

I note that the screenshots show that you have these on a schedule. How do you tell a restart from a scheduled backup? Note that when a backup misses its schedule (for whatever reason), it runs as soon as it is able to.


When a backup is having trouble, it’s a good idea to disable its schedule while you’re chasing the issue. Sometimes it’s convenient to just uncheck some days of the week, so it’s easy to re-check them later.

Regarding metadata, it’s calling them new files (I don’t know why), so it might be more than metadata.


How do you tell a restart from a backup?
disable that schedule while you’re chasing the issue
Indeed, I don’t see a difference; let’s disable my 2am daily backup for now.

Regarding metadata,
let me try to exclude my pcloud folders and run the backup again. So far Duplicati ignores my newly added exception. I’ll post an update.

ok my backup progress bar has reached the finish

I have got this error:

When I click ‘show’ I don’t see the error in the log, all I can see is old errors/warnings.

Do I need to worry?

Did that come at the end (it sounds like it did)? It will probably stop your next backup, since the next backup runs the same tests.

This button seems to always go to the per-job log, which shows results, but an early end often goes to the server log. You can look there in About → Show log to see if there’s anything interesting now or from earlier.

Now that you have access to the job log again (it might have been busy while the job was doing its backup), please see if you can find anything from the earlier floods of warnings. Unfortunately, an error that says “Failed to process path” usually has detail lines below it that the one-line summaries omit. You can get warning details with About → Show log → Live → Warning, or with --log-file=<path> --log-file-log-level=Warning; however, you have probably missed them for past runs unless there’s something in the job or server log.

There are a couple of ways to infer what happened before. One can compare versions to see specifics, but a rough idea is in the summary at the upper right corner of a job log, assuming the backup worked.


is my pretty boring example, but yours might be more interesting, e.g. if pcloudDrive is intermittent, you might see quite large numbers of added and deleted files. When the files reappear, they’re “new”; however, if there’s still a version of the file in the backups that you retain (the Versions number), the data blocks originally uploaded should be attached again now that the file is back, and not much uploading is required. If you like, you can view BackendStatistics in the Complete log for BytesUploaded and more.

B2 doesn’t charge for uploads, but it does for downloads, so we might soon be talking about how to solve the current errors. I know there was a misunderstanding, but it sounded almost like you wanted only 1 file version. That would mean you should eventually change Backup retention on Options screen 5, but it also means there’s more freedom to “solve” the problem with a restart and a free re-upload, and to set up logging for the future…

Other methods include trying to repair or recreate the database, but that would require downloading files.

If being without backups for a while is a concern, you can keep the damaged backup while setting up the new one.
Damaged backups can sometimes be restored with special tools like Duplicati.CommandLine.RecoveryTool.exe.

If upload time for the new backup is a concern, we could attempt to delete your current file versions without losing the uploaded file blocks. This would let you get a new snapshot more quickly by avoiding re-uploading.

Given your wishes and priorities, what sounds interesting?

wow, thanks for the detailed response. I read it three times but couldn’t work out what my best option is.

I don’t mind living without a backup for a while; I can start a fresh upload if required.

Let’s go one by one please… all I want now is to get rid of “Detected non-empty blocksets with no associated blocks!” when I start a backup job. What should I do?

anyways, Recreate (delete and repair) helped to get rid of the non-empty blocksets error