Abort due to constraint violation UNIQUE constraint failed:

This is running against Google Drive. It aborts with this error every time. I see older threads, but no solution. Suggestions?

Google Drive may have nothing to do with this. It looks like an issue with Duplicati’s SQLite DB code, which, as I’ve noted before, does not take certain problems into consideration. This seems to be another one in that bucket.

https://www.sqlite.org/conflict.html

“In most SQL databases, if you have a UNIQUE, NOT NULL, or CHECK constraint on a table and you try to do an UPDATE or INSERT that violates the constraint, the database will abort the operation in progress, back out any prior changes associated with the same UPDATE or INSERT statement, and return an error.”
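As a minimal illustration of that default ABORT behavior (the table and column names here are made up for the example, not Duplicati’s schema):

```python
import sqlite3

# The violating statement is backed out and an error is returned,
# but earlier changes on the connection survive.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE demo (name TEXT UNIQUE)")
con.execute("INSERT INTO demo VALUES ('a')")
try:
    con.execute("INSERT INTO demo VALUES ('a')")  # violates UNIQUE
except sqlite3.IntegrityError as e:
    print(e)  # UNIQUE constraint failed: demo.name
# Only the violating statement was undone; the first row is still there.
print(con.execute("SELECT COUNT(*) FROM demo").fetchone()[0])  # 1
```

Unless the application catches that error and decides what to do next, it propagates up as an exception, which is what the aborts here look like.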

In fact, Duplicati is probably not handling this error properly at all; it just propagates up until something catches it and deals with it as best it can, because if nothing did, the application would crash. There are various conditions it was never coded for, and technically something else could be going on that puts the code into this state in this instance.

For the SQLite error I’ve seen, excluding certain files from the backup got past it (my only alternative short of spending many hours, which I do not have, fixing Duplicati code). The same workaround might apply here.

Obviously, some issue is causing the DB code to get into a bad state here, whether it’s an update, Google Drive, the SQLite DB code, or something else. Duplicati should be handling the error properly, and it isn’t; ideally it should also avoid whatever caused the error in the first place.

So you have to find a workaround, such as starting a fresh backup, or something else if you can find one.

What version of Duplicati? There have been fixes for some cases, and which you hit isn’t shown.

Don’t some of those older threads ask for logs? A UNIQUE constraint violation is way too generic.

Sometimes unexpected things happen, but the end of the path @Xavron noted usually logs something.
Default logging usually cuts off the details, so try About → Show log → Live → Warning and click the entry.

When this started a few days ago, I was on 2.0.6.1_beta_2021-05-03. After the continual failures I upgraded to 2.0.6.3_beta_2021-06-17. It didn’t make a difference; same failure.

Attached is the zipped live warning log.

log.zip (1.3 KB)

Thanks. That narrows it down some. I might as well post the stacks (adding newlines) for future reference:

Nov 12, 2021 4:27 AM: The operation Backup has failed with error: Abort due to constraint violation UNIQUE constraint failed: Remotevolume.Name, Remotevolume.State
{"ClassName":"Mono.Data.Sqlite.SqliteException","Message":"Abort due to constraint violation\r
UNIQUE constraint failed: Remotevolume.Name, Remotevolume.State","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":"  at Mono.Data.Sqlite.SQLite3.Reset (Mono.Data.Sqlite.SqliteStatement stmt) [0x00084] in <83c72f6e53eb49f28420feee73a4aa07>:0 
  at Mono.Data.Sqlite.SQLite3.Step (Mono.Data.Sqlite.SqliteStatement stmt) [0x0003d] in <83c72f6e53eb49f28420feee73a4aa07>:0 
  at Mono.Data.Sqlite.SqliteDataReader.NextResult () [0x00104] in <83c72f6e53eb49f28420feee73a4aa07>:0 
  at Mono.Data.Sqlite.SqliteDataReader..ctor (Mono.Data.Sqlite.SqliteCommand cmd, System.Data.CommandBehavior behave) [0x0004e] in <83c72f6e53eb49f28420feee73a4aa07>:0 
  at (wrapper remoting-invoke-with-check) Mono.Data.Sqlite.SqliteDataReader..ctor(Mono.Data.Sqlite.SqliteCommand,System.Data.CommandBehavior)
  at Mono.Data.Sqlite.SqliteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x00006] in <83c72f6e53eb49f28420feee73a4aa07>:0 
  at Mono.Data.Sqlite.SqliteCommand.ExecuteNonQuery () [0x00000] in <83c72f6e53eb49f28420feee73a4aa07>:0 
  at Duplicati.Library.Main.Database.LocalDatabase.UpdateRemoteVolume (System.String name, Duplicati.Library.Main.RemoteVolumeState state, System.Int64 size, System.String hash, System.Boolean suppressCleanup, System.TimeSpan deleteGraceTime, System.Data.IDbTransaction transaction) [0x00060] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.Collections.Generic.IEnumerable`1[T] protectedFiles) [0x008ee] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.Collections.Generic.IEnumerable`1[T] protectedFiles) [0x00000] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify (Duplicati.Library.Main.BackendManager backend, System.String protectedfile) [0x0011d] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x01048] in <34c05650075c4a0583e29e116862f000>:0 
  at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task) [0x00050] in <9a758ff4db6c48d6b3d4d0e5c2adf6d1>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x00009] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Library.Main.Controller+<>c__DisplayClass14_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x0004b] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0026f] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Library.Main.Controller.Backup (System.String[] inputsources, Duplicati.Library.Utility.IFilter filter) [0x00074] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x00349] in <87f95256fc6a4c3ea353de8d2aacf89b>:0 ","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":null,"HResult":-2147467259,"Source":"mscorlib"}
Nov 12, 2021 4:27 AM: Fatal error
{"ClassName":"Mono.Data.Sqlite.SqliteException","Message":"Abort due to constraint violation\r
UNIQUE constraint failed: Remotevolume.Name, Remotevolume.State","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":"  at Mono.Data.Sqlite.SQLite3.Reset (Mono.Data.Sqlite.SqliteStatement stmt) [0x00084] in <83c72f6e53eb49f28420feee73a4aa07>:0 
  at Mono.Data.Sqlite.SQLite3.Step (Mono.Data.Sqlite.SqliteStatement stmt) [0x0003d] in <83c72f6e53eb49f28420feee73a4aa07>:0 
  at Mono.Data.Sqlite.SqliteDataReader.NextResult () [0x00104] in <83c72f6e53eb49f28420feee73a4aa07>:0 
  at Mono.Data.Sqlite.SqliteDataReader..ctor (Mono.Data.Sqlite.SqliteCommand cmd, System.Data.CommandBehavior behave) [0x0004e] in <83c72f6e53eb49f28420feee73a4aa07>:0 
  at (wrapper remoting-invoke-with-check) Mono.Data.Sqlite.SqliteDataReader..ctor(Mono.Data.Sqlite.SqliteCommand,System.Data.CommandBehavior)
  at Mono.Data.Sqlite.SqliteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x00006] in <83c72f6e53eb49f28420feee73a4aa07>:0 
  at Mono.Data.Sqlite.SqliteCommand.ExecuteNonQuery () [0x00000] in <83c72f6e53eb49f28420feee73a4aa07>:0 
  at Duplicati.Library.Main.Database.LocalDatabase.UpdateRemoteVolume (System.String name, Duplicati.Library.Main.RemoteVolumeState state, System.Int64 size, System.String hash, System.Boolean suppressCleanup, System.TimeSpan deleteGraceTime, System.Data.IDbTransaction transaction) [0x00060] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.Collections.Generic.IEnumerable`1[T] protectedFiles) [0x008ee] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.Collections.Generic.IEnumerable`1[T] protectedFiles) [0x00000] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify (Duplicati.Library.Main.BackendManager backend, System.String protectedfile) [0x0011d] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.RunAsync (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x01048] in <34c05650075c4a0583e29e116862f000>:0 
  at CoCoL.ChannelExtensions.WaitForTaskOrThrow (System.Threading.Tasks.Task task) [0x00050] in <9a758ff4db6c48d6b3d4d0e5c2adf6d1>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter, System.Threading.CancellationToken token) [0x00009] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Library.Main.Controller+<>c__DisplayClass14_0.<Backup>b__0 (Duplicati.Library.Main.BackupResults result) [0x0004b] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0026f] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Library.Main.Controller.Backup (System.String[] inputsources, Duplicati.Library.Utility.IFilter filter) [0x00074] in <34c05650075c4a0583e29e116862f000>:0 
  at Duplicati.Server.Runner.Run (Duplicati.Server.Runner+IRunnerData data, System.Boolean fromQueue) [0x00349] in <87f95256fc6a4c3ea353de8d2aacf89b>:0 ","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":null,"HResult":-2147467259,"Source":"mscorlib"}

Remotevolume.Name, Remotevolume.State suggests the problem is that there are somehow multiple files with the same name on Google Drive. This is a situation almost nothing but Google Drive can achieve.

I suppose you could go look for recent duplicates at drive.google.com or some tool that can sort by date.

Going down the stack, UpdateRemoteVolume is what hit the clash. It is called from RemoteListAnalysis, below it in the stack, which does UpdateRemoteVolume in several places, most of which log. About → Show log → Live → Information might be a lightweight way to get a clue. Heavier logging (e.g. Profiling) would give a better clue though…

You could also check <job> → Show log → Remote and click on the latest list to see what files it saw, although this is probably harder to study than other ways, e.g. Google web UI or some third-party viewer.

You could get a database bug report and post a link for someone to look in the DB, or you can look yourself.
It’s rather like a spreadsheet. The table you’d want is the Remotevolume table. An example browser is here.

There was a 2020 report that the Repair button solved this, and a 2017 one that it didn’t. You could also blindly delete the latest version (which is always 0) to see if that will remove some newly added problem.

Although this issue seems very rare, it would be nice to understand its cause instead of just doing repair.
Chances are good that it involved some sort of glitch between Duplicati and Google Drive, and setting up logging to a file in case it happens again would be necessary. Does this happen much, or is it very rare?

I checked the remote logs. All the failed ones are the same: a list of block and index files, with the earliest at the end (in June) and the newest, 11/8 files, at the top of the list; 11/8 was the date of the last successful backup. I checked both the oldest and the newest, and they are both on Google Drive. This is a big backup with more than 10,000 files.

I also looked at the verbose log. All the mentions of the UNIQUE constraint look like this (same file each time):

2021-11-13 13:15:18 -05 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Temporary: duplicati-20211109T130550Z.dlist.zip.aes
2021-11-13 13:15:18 -05 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-SchedulingMissingFileForDelete]: scheduling missing file for deletion, currently listed as Uploading: duplicati-20211109T130550Z.dlist.zip.aes
2021-11-13 13:15:18 -05 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
Mono.Data.Sqlite.SqliteException (0x80004005): Abort due to constraint violation^M

That file doesn’t exist on the Google Drive.

Ideas?

I’m getting on thin ice because I don’t know the design that well, but I can attempt to interpret the code.

The program looks in its database and updates file statuses to tidy up before the backup itself begins.

This puts the first sighting of this file into a queue for deletion after all of the files have been looked through.

The second sighting of this file updates the file state to Deleting, which should not be an issue because the first sighting is at Temporary.

After all the remote volumes in the database have been adjusted, cleanup code runs on the database.

Although I can’t find it in the code, one theory is that the first sighting of the file had its state changed to Deleting somewhere on the way out, which would make the new state clash with the second sighting, already at Deleting.

The same name at the same state violates the UNIQUE constraint, which says that the (Name, State) pairing must be unique.
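That theorized clash can be sketched in a few lines. The table here is a simplified stand-in (the real Remotevolume table has more columns), with the UNIQUE(Name, State) constraint the error message names:

```python
import sqlite3

# Simplified stand-in for the Remotevolume table, with the
# UNIQUE(Name, State) constraint from the error message.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE Remotevolume (Name TEXT, State TEXT, UNIQUE(Name, State))")
name = "duplicati-20211109T130550Z.dlist.zip.aes"
# Two sightings of the same file in different states coexist fine...
con.execute("INSERT INTO Remotevolume VALUES (?, 'Temporary')", (name,))
con.execute("INSERT INTO Remotevolume VALUES (?, 'Deleting')", (name,))
try:
    # ...but moving the Temporary row to Deleting duplicates the
    # (Name, State) pair and aborts, matching the backup's error.
    con.execute(
        "UPDATE Remotevolume SET State = 'Deleting' WHERE State = 'Temporary'")
except sqlite3.IntegrityError as e:
    print(e)  # UNIQUE constraint failed: Remotevolume.Name, Remotevolume.State
```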

This would possibly be easier to see in a profiling-level log, but those get quite large if they run for a while.

Can you try some of the things below? duplicati-20211109T130550Z.dlist.zip.aes can be typed as a filter, and some number of rows should pop out of that, maybe two rows showing what the messages are suggesting.

As a precaution, work on a copy of the database rather than the original, if you decide to peek at tables.
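If you do look, the DB-browser filter amounts to a query like the sketch below. The table here is a toy stand-in with only the two columns the error names, not the real schema:

```python
import sqlite3

# Toy stand-in for a *copy* of the job database; in a DB browser you
# would type the filename into the Name filter instead of writing SQL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Remotevolume (Name TEXT, State TEXT)")
con.executemany("INSERT INTO Remotevolume VALUES (?, ?)", [
    ("duplicati-20211109T130550Z.dlist.zip.aes", "Temporary"),
    ("duplicati-20211109T130550Z.dlist.zip.aes", "Deleting"),
    ("duplicati-other.dblock.zip.aes", "Verified"),
])
rows = con.execute(
    "SELECT Name, State FROM Remotevolume WHERE Name = ?",
    ("duplicati-20211109T130550Z.dlist.zip.aes",)).fetchall()
for r in rows:
    print(r)  # expect two rows for the same file, Temporary and Deleting
```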

After getting more sure of the problem, how to fix it is the next question. Do you ever Recreate database? Large backups can be very slow, and some exceptional cases can also download the entire backup area.

How big is it, and how large is the Options screen 5 Remote volume size? If at the default 50 MB, 10,000 remote files is about 5,000 50 MB dblock files, so 250 GB, which should maybe have had an above-default-100KB blocksize.
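A quick sketch of that arithmetic (assuming remote files split roughly evenly between dblock and dindex files):

```python
# Back-of-envelope check: remote files come roughly in dblock/dindex
# pairs, so about half of 10,000 remote files are dblocks.
remote_files = 10_000
dblocks = remote_files // 2       # ~5,000 dblock files
total_gb = dblocks * 50 / 1000    # at 50 MB per dblock
print(dblocks, total_gb)  # 5000 250.0
```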

Or are you talking about 10,000 source files? In that case, look at your logs to see what the source size is.

If we go down the rebuild-DB path, copy off the old DB first, in case making a new one gets to be too slow. Manually editing the DB (by you) might also be possible, but risky. First, though, let’s look at the state of the DB.

I really appreciate your thoughtful reply. Unfortunately it came in shortly after a fair amount of following inconclusive leads, and I had already done a repair. The repair failed in exactly the same way as a daily backup, and out of frustration I went ahead with the delete and recreate. I’m now two days into what will probably be a four-day recreate. At the moment it’s at “Processing indexlist volume 32161 of 57592”.

Of course this precludes following up on the investigation you suggested.