Problem after update to 2.1.0.2 (Beta)

I have two backups: one local and one in the cloud. After updating to Canary, the local backup still works, but the cloud backup doesn’t. I get this:

2024-12-24 00:12:27 +08 - [Error-Duplicati.Library.Main.AsyncDownloader-FailedToRetrieveFile]: Failed to retrieve file duplicati-befc3a22ff48342b9b62e2f5f58e9cf2f.dblock.zip.aes
IOException: The request could not be performed because of an I/O device error.

I attempted a repair, and got this:

The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again.

The operation Repair has failed with error: unknown error
No transaction is active on this connection => unknown error
No transaction is active on this connection

code = Unknown (-1), message = System.Data.SQLite.SQLiteException (0x80004005): unknown error
No transaction is active on this connection
at System.Data.SQLite.SQLiteTransaction.Commit()
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun(LocalDatabase dbparent, Boolean updating, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.Run(String path, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
at Duplicati.Library.Main.Operation.RepairHandler.RunRepairLocal(IFilter filter)
at Duplicati.Library.Main.Operation.RepairHandler.Run(IFilter filter)
at Duplicati.Library.Main.Controller.<>c__DisplayClass21_0.<Repair>b__0(RepairResults result)
at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.RunAction[T](T result, IFilter& filter, Action`1 method)
at Duplicati.Library.Main.Controller.Repair(IFilter filter)
at Duplicati.CommandLine.Commands.Repair(TextWriter outwriter, Action`1 setup, List`1 args, Dictionary`2 options, IFilter filter)
at Duplicati.CommandLine.Program.ParseCommandLine(TextWriter outwriter, Action`1 setup, Boolean& verboseErrors, String[] args)
at Duplicati.CommandLine.Program.RunCommandLine(TextWriter outwriter, TextWriter errwriter, Action`1 setup, String[] args)
Return code: 100

Any idea what the problem is, and how to solve it?

Please be more specific, e.g. what Storage Type is set on the Destination, plus other relevant clues.

You can certainly also test your cloud by getting that URL from Export As Command-line.

BackendTool
BackendTester
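
Both are command-line tools in the Duplicati install folder. A minimal sketch of running them, with placeholder paths since your destination isn’t known yet (BackendTool’s list is read-only; BackendTester uploads, downloads, and deletes its own test files, so point it at an empty scratch folder, never the real backup):

    Duplicati.CommandLine.BackendTool.exe list "file://X:\some\backup\folder"
    Duplicati.CommandLine.BackendTester.exe "file://X:\some\empty\test\folder"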


From what, and did it work before that? Also, what OS?


Your cloud in 2022 used to be pCloud Drive with a USB cache drive that sometimes failed.

Google “The request could not be performed because of an I/O device error.” for advice; the results support the idea that some hard drive or external storage device may be having trouble.

Native pCloud support is a new addition (so maybe all the bugs are not worked out) that you can try if you use pCloud.


I use pCloud. No URL.

I tried another backup, and now Duplicati tells me: “Found 6889 remote files that are not recorded in local storage, please run repair”

Everything had been running smoothly for months, until I updated to Canary.

From what, and did it work before that? Also, what OS?

Windows 10. And yes, it worked before. I shouldn’t have updated. When it’s not broken… :ó(

Google “The request could not be performed because of an I/O device error.”

That’s an error I commonly got before (i.e., over the past months), but then I would try again and it would work.

Your cloud in 2022 used to be pCloud Drive with a USB cache drive that sometimes failed.

Thank you for digging into the issue. The problem at the time was with controlled folder access. It doesn’t seem to be the case this time.

Now when I try to repair the database, I get this:

The database was attempted repaired, but the repair did not complete. This database may be incomplete and the repair process is not allowed to alter remote files as that could result in data loss.

Please be specific. At Duplicati level, there is always a URL even for a Local folder or drive.

Import and export backup configurations shows how to Export As Command-line to run a test.

Suggested tests that use the URL were named, but at least now it looks like the file listing is working:

Did you delete the local database? If so, the Repair button should be able to recreate it.
This does depend on pCloud Drive continuing to work, the destination being intact, etc.
Testing the Destination in Duplicati would still be a good idea, especially given that:

Got before using what? You also wrote that:

so which was it, common error or smooth running (or does smooth mean after a retry)?
Possibly the common intermittent error has somehow turned into a solid error?

What Duplicati were you running before? The word “to” suggests it was a Beta.
2.0.8.1_beta_2024-05-07 is probably still in wide use. That was the older code.
2.1.0.2_beta_2024-11-29 is the much-changed one featured at download page.

Based on the “for months”, I guess you were on 2.0.8.1, as 2.1 is single month.

You can probably get Canary by changing Settings, but see the description of it.
Direct access from GitHub is also possible. Any reason why you took a Canary?
Sometimes needed fix goes there first, but any code change can also add bugs.
What you are seeing is not a bug I know of, so it’s probably best to test the pCloud setup.

Both 2.1 Beta and Canary should have native pCloud support if you want to try.
If your pCloud Drive has died solidly, new native support will be bypassing that.

Downgrade from 2.1.0.2 to 2.0.8.1 (if you were on 2.0.8.1) gives a path for that.
There might also be some copies of old databases, depending on how the update was done.

If pCloud Drive has long been trouble, would you prefer to use new native way?

It’s new and may have bugs, but at least its newness means someone’s ready to fix it.
That’s what the Canary test releases are for, so maybe just go with the path you took.
About → Changelog shows “Fixed supporting subfolders in pCloud” in 2.1.0.101
which means 2.1.0.2 doesn’t have that. Maybe go 2.1 Beta later, but not just yet.


At Duplicati level, there is always a URL even for a Local folder or drive.

Is it the same thing as the folder path? If so, it’s P:\Crypto Folder\Backup\Duplicati\PA\

If I do an export, I get this:

{
  "CreatedByVersion": "2.1.0.2",
  "Schedule": {
    "ID": 3,
    "Tags": [
      "ID=10"
    ],
    "Time": "2024-12-31T15:55:00Z",
    "Repeat": "1D",
    "LastRun": "2024-12-31T08:00:31Z",
    "Rule": "AllowedWeekDays=Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday",
    "AllowedDays": [
      "Monday",
      "Tuesday",
      "Wednesday",
      "Thursday",
      "Friday",
      "Saturday",
      "Sunday"
    ]
  },
  "Backup": {
    "ID": "10",
    "Name": "PA on pCloud",
    "Description": "",
    "Tags": [],
    "TargetURL": "file://P:\\Crypto Folder\\Backup\\Duplicati\\PA\\",
    "DBPath": "C:\\Users\\user\\AppData\\Local\\Duplicati\\NXJLOMBWSA.sqlite",
    "Sources": [
      "%DESKTOP%",
      "D:\\PA\\Programs\\",
      "E:\\Documents\\"
    ],
    "Settings": [
      {
        "Filter": "",
        "Name": "encryption-module",
        "Value": "aes",
        "Argument": null
      },
      {
        "Filter": "",
        "Name": "compression-module",
        "Value": "zip",
        "Argument": null
      },
      {
        "Filter": "",
        "Name": "dblock-size",
        "Value": "50mb",
        "Argument": null
      }
    ],
    "Filters": [],
    "Metadata": {
      "LastBackupDate": "20241223T095223Z",
      "BackupListCount": "160",
      "TotalQuotaSpace": "2199023255552",
      "FreeQuotaSpace": "1353475182592",
      "AssignedQuotaSpace": "-1",
      "TargetFilesSize": "169621142958",
      "TargetFilesCount": "6886",
      "TargetSizeString": "157.97 GB",
      "SourceFilesSize": "146793035736",
      "SourceFilesCount": "176871",
      "SourceSizeString": "136.71 GB",
      "LastBackupStarted": "20241223T095223Z",
      "LastBackupFinished": "20241223T101316Z",
      "LastBackupDuration": "00:20:53.6027695",
      "LastCompactDuration": "00:02:25.0945691",
      "LastCompactStarted": "20240617T160743Z",
      "LastCompactFinished": "20240617T161008Z",
      "LastErrorDate": "20241231T121831Z",
      "LastErrorMessage": "The database was attempted repaired, but the repair did not complete. This database may be incomplete and the repair process is not allowed to alter remote files as that could result in data loss.",
      "LastRestoreDuration": "00:11:27.7808689",
      "LastRestoreStarted": "20230317T072527Z",
      "LastRestoreFinished": "20230317T073654Z"
    },
    "IsTemporary": false
  },
  "DisplayNames": {
    "%DESKTOP%": "Desktop",
    "D:\\PA\\Programs\\": "Programs",
    "E:\\Documents\\": "Documents"
  }
}

Did you delete the local database?

I tried “Recreate (delete and repair).” It failed. I can’t remember the message, but I’ll try again.

Done. I got a new kind of error message:

Error while running PA on pCloud
constraint failed UNIQUE constraint failed: DuplicateBlock.BlockID, DuplicateBlock.VolumeID

Got before using what?

The I/O error is one I’ve had for months. It would try to back up, and immediately stop with the I/O error. But then if I tried again, it would work.

Possibly the common intermittent error has somehow turned into a solid error?

Possibly. The original problem, post-update, was that it couldn’t find a specific file. This isn’t something I had previously with the I/O error thingy, which happened right at the start of a backup and would have no lasting effect.

Based on the “for months”, I guess you were on 2.0.8.1,

Correct.

Any reason why you took a Canary?

Duplicati told me an update was ready. Wait. Maybe I didn’t get Canary, then. No, I didn’t, I got “Duplicati - 2.1.0.2_beta_2024-11-29.”

I tried editing the title of this thread accordingly, but the forum sends me back an error message: “An error occurred: You are not permitted to view the requested resource.” :o/

If pCloud Drive has long been trouble, would you prefer to use new native way?

That means making a fully new backup, right?

Please see the link you quoted for format options. The export will use file://; however, a plain Windows-style folder path is also understood, for example by the test tools I named.

The request was for an Export As Command-line, but yours will do (though it reveals extra information).

The TargetURL is sort of what I wanted. For CLI use, be sure to double the trailing backslash.
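
The doubling matters because, in a quoted Windows argument, a single trailing backslash would escape the closing quote. A sketch of a test run using that TargetURL (the trailing sample count of 1 is illustrative; your real Export As Command-line output also carries the passphrase and other needed options):

    Duplicati.CommandLine.exe test "file://P:\Crypto Folder\Backup\Duplicati\PA\\" 1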

What was Duplicati doing then? Do you mean source or a destination file?

That’s less dramatic, and it would have been offered by the 2.0.8.1 Beta’s update popup.
Beta update channel means it’s already had testing by the Canary testers.

If your old pCloud backup is in a compatible location, I’d think it just works.

You could sort of test around in an empty new backup Destination screen.
Just set up the Destination, see if Test connection button finds the old one.
If it says it cannot find one and offers to create a new folder, maybe you can let it do that.
Don’t finish or save the new backup yet. This is just to line up folder uses.
Locating that folder however you normally browse pCloud will let you find where its files are.

Can you move files in the pCloud GUI? I don’t use pCloud, so I’m guessing somewhat.
If you don’t know the pCloud GUI either, you could practice first by looking for
Crypto Folder\Backup\Duplicati\PA however one navigates in the GUI.
I would hope that the Path on server is similar, but with forward slashes.

The "TargetFilesCount": "6886" in your export is very close to, but not exactly the same as:

“Found 6889 remote files that are not recorded in local storage, please run repair”
Possibly it managed to get 3 more files uploaded. Can you sort your files by date?

I can try changing the title later. If you really want, you can look up the error at meta.discourse.org.
The Duplicati team does not write the forum software, but does administer this forum.


Pcloud and Duplicati is the forum topic linked by the pull request behind this 2.1.0.101 change:

Breaking change from previous canary

The pCloud backend is updated to support subfolders.
Paths with / will now be treated as a folder structure on pCloud.
Previous version would treat folder/subfolder as a single foldername,
this version treats it as folder / subfolder.

and since you found you were not actually on a 2.1.0.10x Canary, this will impact your native API usage when you move to it.
I would hope that you know how to move the files around in pCloud to satisfy both these cases.
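
Concretely, as an illustration using your folder names: a Path on server of Crypto Folder/Backup/Duplicati/PA would earlier have been treated as one folder whose name literally contains slashes, while 2.1.0.101 and later treat it as the nested folders Crypto Folder, then Backup, then Duplicati, then PA.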

Since you are not using the version with pCloud native support, this indicates that you are using the drive-sync feature of pCloud. This also explains the error message “The request could not be performed because of an I/O device error.”, which comes up when the request looks like local file access to Duplicati, but is actually a network request that may fail.
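
A minimal sketch of that failure mode in Python, assuming P: is the pCloud virtual drive and that it surfaces a network failure as the Windows I/O device error the message names (path and file name are placeholders):

    # Looks like a plain local read to the application...
    try:
        with open(r"P:\Crypto Folder\Backup\Duplicati\PA\some.dblock.zip.aes", "rb") as f:
            data = f.read()  # ...but the virtual drive fetches it over the network
    except OSError as e:
        # e.g. [WinError 1117] The request could not be performed
        # because of an I/O device error
        print(e)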

I would recommend using the latest Canary and then switching to the pCloud native provider, as the file-mapped sync solutions tend to have many quirks when used by an application (such as Duplicati) that assumes they behave as a regular folder.


What was Duplicati doing then? Do you mean source or a destination file?

I mean this:

Failed to retrieve file duplicati-befc3a22ff48342b9b62e2f5f58e9cf2f.dblock.zip.aes

The file does exist in the pCloud Duplicati folder.

Can you move files in pCloud GUI?

Yes. It looks like a regular folder in Windows Explorer.

Can you sort your files by date?

Yes. No file has been modified since December 24, the date of the last successful backup (the one before I updated Duplicati to 2.1.0.2).

and since you found you were not actually on a 2.1.0.10x Canary, this will impact your native API usage when you move to it.
I would hope that you know how to move the files around in pCloud to satisfy both these cases.

For months there was no problem, except for the occasional I/O error, which wasn’t much of an issue (it meant that one backup failed, yes, but the next one would work, so…).

I would recommend using latest canary and then switching to the pCloud native provider as the file-mapped sync solutions tends to have many quirks when used by an application (such as Duplicati) which assumes it behaves as a regular folder.

Is it easy to configure? I’m no longer computer savvy.

Do the dlist file dates there seem to match the file names? If so, what is the dblock’s date?
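
For reference, dlist names embed the backup start time, so the one from your last good backup should look roughly like this (a hypothetical name reconstructed from the LastBackupStarted value in your export):

    duplicati-20241223T095223Z.dlist.zip.aes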

That (a network request that looks like local file access) is the idea, but it may seem strange applied to your case of direct writes as if to a local drive.
I don’t know the design of pCloud Drive though, and I don’t have it, so no better answer yet.

Now that Duplicati has chosen to do native pCloud, there’s an obligation to understand that.
Your pCloud Drive glitches are kind of beyond what Duplicati support is prepared to handle.

A screenshot of the configuration was above. I don’t use pCloud, but it looks potentially easy, though it possibly needs a move of files (which you make sound easy) if its storage location is different.

Either change the GUI Path on server to where it must be, or move the files as necessary. Probably the first method is faster. I have no idea how slow a file move is, e.g. done with Explorer. Perhaps pCloud also has a browser GUI where at least the move would be done at their end?

Sorry, I don’t understand. My knowledge is too poor. -_-"

I fear this file is no longer the problem, though. When I tried “Repair,” it seems I made the problem worse. Now the database is damaged, apparently (Duplicati tells me to try to repair it, but as mentioned in a previous post, it fails).

Right now I’m trying again to recreate the database.

It’s only in the current Canary release, though, correct? Maybe I should avoid Canary releases, considering how I manage to f* up backups even with regular releases. At least my local backup still works.

Oh, yes, right, sorry. Yes, I know where it is, then.

Since it seems I broke everything, I’ll probably start a backup from scratch, and hope I never need something I deleted/changed over the past years.

I think it is done at their end anyway. By which I mean that if I move a file via Explorer, their server moves the file, in fact, not my computer.

It is only in Canary. It was added in 2.1.0.100_canary_2024-11-25 but had a fix in 2.1.0.101_canary_2024-12-06. Current Canary is 2.1.0.104_canary_2024-12-31.

Canary builds are test releases where things can mess up, but usually because of code, not the user.
Troubleshooting and repairs may then be needed, and can sometimes be difficult.

The counter argument is that Canary sometimes adds feature or fix that you want.

This sort of magic (Google Drive is similar but has two modes) can be hard to understand.

Choice is yours anyway, so please keep posting details of problems if you want help.


Will it also be in the next stable release of Duplicati, do you know?

Gotcha, thanks.

I tried repairing the database again and got this error message: “constraint failed UNIQUE constraint failed: DuplicateBlock.BlockID, DuplicateBlock.VolumeID”

Anyway, you did all you could to help me. Thank you (and kenkendk).

You likely meant the next Beta release. A plan for that and the first Stable is here.
Plans may change, but I haven’t heard of any bug that would remove pCloud.
It’s not clear how many testers it’s had, though. Fewer testers means less testing.

A Google search for UNIQUE constraint failed: DuplicateBlock finds only this topic.
This table is usually empty, so it’s not been interesting. I suspect a damaged dindex file.

If so, it would probably be easy to find by getting even an information-level log.
After that, the challenge for experts getting the file would be to solve its puzzles…

I am not such an expert, but while I was studying recreate in 2023, I wrote this:

public UpdateBlock(hash, size, volumeID, transaction)
	m_findHashBlockCommand: SELECT VolumeID from Block table, given Hash, Size
	if block found with this volumeID
		return false
	else if block not found at all
		m_insertBlockCommand: INSERT into Block table: Hash, Size, VolumeID
		return true
	else if block needs a volumeID
		m_updateBlockVolumeCommand: UPDATE Block table VolumeID, given Hash, Size
		return true
	else (block is already set up in some other volume)
		m_insertDuplicateBlockCommand: INSERT into DuplicateBlock table: BlockID, VolumeID
			based on a SELECT with the given hash and size, plus a volumeID to record
		return false

The bottom INSERT into the DuplicateBlock table (BlockID, VolumeID) might have tried to run repeatedly.
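
A minimal sketch with Python’s sqlite3 (the schema and index definition are assumptions, not copied from Duplicati) showing how a repeated (BlockID, VolumeID) pair trips such a unique index, and how an INSERT OR IGNORE would not:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE DuplicateBlock (BlockID INTEGER, VolumeID INTEGER)")
    # 2.1.0.2 reportedly added a unique index over the pair, something like:
    con.execute("CREATE UNIQUE INDEX ux_dup ON DuplicateBlock (BlockID, VolumeID)")

    con.execute("INSERT INTO DuplicateBlock VALUES (1, 7)")  # first sighting: fine
    con.execute("INSERT INTO DuplicateBlock VALUES (1, 8)")  # same block, other volume: fine
    try:
        con.execute("INSERT INTO DuplicateBlock VALUES (1, 7)")  # same pair again
    except sqlite3.IntegrityError as e:
        print(e)  # UNIQUE constraint failed: DuplicateBlock.BlockID, DuplicateBlock.VolumeID

    # INSERT OR IGNORE makes the repeated insert a silent no-op instead:
    con.execute("INSERT OR IGNORE INTO DuplicateBlock VALUES (1, 7)")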

Maybe we’ll have a comment on both of your questions from the lead developer sometime.

In terms of what to do, if you truly got backup damage (maybe helped by pCloud flakiness), reverting to 2.0.8.1 or waiting for next Beta likely won’t help recreate. Do you do that much?

Maybe your 2.0.8.1 database is still around and could be used. Its name would likely be like:

"DBPath": "C:\\Users\\user\\AppData\\Local\\Duplicati\\NXJLOMBWSA.sqlite"

except likely with backup or .bak and the date of the 2.1.0.2 update in the file name.

Even if this can be made to work, it’s dangerous to live in a can’t-recreate-database situation.
A fresh start is probably advised at some point. If you need old data now, that’s a different task.

This sort of approach is always risky, as repeated errors can cause problems you don’t notice.

Above discussions are all quite speculative, as this is the first report of it and I’m not an expert.


Right. -_-"

:+1:

I’d like to test it, but I’m afraid that the Canary release will break my local backup, which still works!

(It’s great there’s a native backup for pCloud, mind, even if still not fully tested.)

I don’t need the old data now, and hopefully never will. Better ready than sorry, though, so it was worth trying a repair.

Ah, crap. I thought it was OK since Duplicati didn’t find errors during the next backup attempt.

Compared to me, you certainly are! Again, thank you. Fresh backup it is. . . .

We can hope pCloud Drive cooperates. pCloud native is the future. pCloud WebDAV has issues.


This one is a bit worrying. It looks like Duplicati is attempting to register the same hash twice as a duplicate. Do you happen to have more logging on why/when this happens, so I can reproduce it?

This was new in 2.1.0.2; I added an index to this table to prevent duplicates. There is one place left where I missed updating the code to ignore duplicates, and I think this is the place where the error happens.


I guess the new issue for this is

where I possibly misunderstood the complete plan and suggested dindex duplicates are normal; however, wouldn’t an observer want to know which volumes all the duplicate blocks are found in?

EDIT:

I’m confusing myself… The table would not run into trouble if it showed different volumes for blocks:

but if the same block appeared multiple times in the same volume (I have some), that may hurt it.