It looks like the metadata got lost for some reason, so Duplicati thinks all your files may be new/changed. It will reprocess those files but it won’t re-upload any data on the back end (except for actual new/changed blocks of files).
The “8 versions” are Duplicati backup snapshots. Every time you run a backup, a new version is created. Versions are retained according to your retention settings. If you go to restore a file, you can pick which of these versions to restore from. The B2 version retention is different. Keep it disabled (set to retain only latest version of a file) because Duplicati does its own version retention.
I can’t say at the moment why the metadata changes were detected but at this point I would let Duplicati finish. Did you do a database repair/recreate recently?
I have excluded the whole pcloudDrive directory from the backup (pCloud is a Dropbox-like service that backs itself up). I hope Duplicati does not hiccup somewhere else and this backup finishes…
I note that the screenshots show that you have these on a schedule. How do you tell a restart from a backup? Note that when a backup misses its schedule (for whatever reason), it runs as soon as it is able.
When a backup is having trouble, it’s a good idea to disable its schedule while you’re chasing the issue. Sometimes it’s convenient to just uncheck some days of the week, so it’s easy to re-check them later.
Regarding metadata, it’s calling them new files (I don’t know why), so it might be more than metadata.
“How do you tell a restart from a backup? … disable that schedule while you’re chasing the issue”
Indeed, I don’t see a difference; let me disable my 2 am daily backup for now.
“Regarding metadata,”

Let me try to exclude my pcloud folders and run the backup again. So far Duplicati ignores my newly added exclusion. I’ll post an update.
Did that error come at the end of the run (it sounds like it)? It will probably stop your next backup too, because a backup runs the same tests.
This button seems to always go to the per-job log, which shows results, but an early end often logs to the server log instead. You can look there in About → Show log to see if there’s anything interesting now or from earlier.
Now that you have access to the job log again (it might have been busy while the job was doing its backup), please see if you can find anything from the earlier floods of warnings. An error that says “Failed to process path” usually has detail lines below it that the one-line summaries omit. You can get warning details with About → Show log → Live → Warning, or with `--log-file=<path> --log-file-log-level=Warning`; however, you have probably missed them for past runs unless there’s something in the job or server log.
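For future runs, the logging options can also be added to a command-line backup (or pasted into the job’s advanced options in the GUI). A minimal sketch, assuming a hypothetical B2 destination URL and source folder; `--log-file` and `--log-file-log-level` are standard Duplicati advanced options:

```shell
# Sketch only: "b2://my-bucket/my-backup", the source folder, and the
# log path are placeholders, not values from this thread.
# Writes warnings and errors for each run to a persistent log file.
duplicati-cli backup "b2://my-bucket/my-backup" /home/me/Documents \
  --log-file=/var/log/duplicati-job.log \
  --log-file-log-level=Warning
```

With the log file in place, detail lines for warnings like “Failed to process path” survive past the run instead of scrolling out of the live log.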
There are a couple of ways to infer what happened before. One can compare versions to see specifics; however, a rough idea is in the summary at the upper-right corner of a job log, assuming the backup worked.
Mine is a pretty boring example, but yours might be more interesting: if pcloudDrive is intermittent, you might see quite large numbers of added and deleted files. When the files reappear, they’re “new”; however, if there’s still a version of the file among the backups that you retain (the Versions number), the data blocks originally uploaded should be attached again now that the file is back, and not much uploading is required. If you like, you can view BackendStatistics in the Complete log for BytesUploaded and more.
B2 doesn’t charge for uploads, but it does charge for downloads, so we might soon be talking about how to solve the current errors. I know there was a misunderstanding, but it sounded almost like you wanted only one file version. If so, you should eventually change Backup retention on Options screen 5, but it also means there’s more freedom to “solve” the problem with a fresh start, free upload, and logging set up for the future…
Other methods include trying to repair or recreate the database, but that would require downloading files.
If being without backups for a while is a concern, you can keep the damaged backup while doing the new one.
Damaged backups might be restorable with special tools like Duplicati.CommandLine.RecoveryTool.exe.
If upload time for a new backup is a concern, we could attempt to delete your current file versions without losing the uploaded file blocks. This would let you get a new snapshot more quickly by avoiding re-uploading.
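If we go that route, version deletion can be done with Duplicati’s `delete` command. A hedged sketch with a hypothetical destination URL; the `delete` command and its `--version` option exist in the Duplicati CLI, and I’m assuming here that a comma/range list is accepted for the version numbers:

```shell
# Sketch only: "b2://my-bucket/my-backup" is a placeholder destination.
# Deletes backup versions 1 through 7, keeping version 0 (the most recent).
# Data blocks still referenced by the kept version stay on the backend.
duplicati-cli delete "b2://my-bucket/my-backup" --version=1-7
```

Compacting afterwards (or Duplicati’s automatic compact) would then reclaim space used only by the deleted versions.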
Given your wishes and priorities, what sounds interesting?
Wow, thanks for the detailed response. I read it three times but couldn’t work out what my best option is.
I don’t mind living without a backup for a while; I can start a fresh upload if required.
Let’s go one by one, please… All I want right now is to get rid of “Detected non-empty blocksets with no associated blocks!” when I start a backup job. What should I do?