2.0.4.5 - The process cannot access the file because it is being used by another process

Even if I change the job to back up just a single file, like a Word document or something, I still get the “locked file” error.

So this is not a locked file among the files being backed up, which is the more common thing to come across.

I can confirm that the “System.IO.IOException: The process cannot access the file because it is being used by another process.” error, caused by the database file being locked by Duplicati itself, can be solved by:

  • First run a repair job
  • Then run the backup job

If I run the backup job directly, the repair starts first in that job and then fails on the locked database file. Locked by Duplicati itself :slight_smile:
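
For anyone who runs jobs from the command line instead of the web UI, I believe the same repair-then-backup sequence looks roughly like this (the storage URL, database path and passphrase below are made-up placeholders for my setup, so I may not have every detail right):

   rem repair first, so the job's local database is consistent again
   "C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" repair "sftp://backupserver/duplicati" --dbpath="C:\Users\me\AppData\Local\Duplicati\CAHXWOEJJD.sqlite" --passphrase="mysecret"
   rem then run the backup itself with the same settings
   "C:\Program Files\Duplicati 2\Duplicati.CommandLine.exe" backup "sftp://backupserver/duplicati" "C:\Users\me\Documents" --dbpath="C:\Users\me\AppData\Local\Duplicati\CAHXWOEJJD.sqlite" --passphrase="mysecret"

The important part, as far as I understand it, is pointing --dbpath at the same database file the web UI job uses, otherwise the repair operates on a different database.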

This problem started with the 2.0.4.5 beta. The error started popping up when restarting jobs I had stopped mid-run. I think, but I’m not 100% sure, that the jobs I stopped mid-run were jobs without a local database but with a complete set of backup files.

Log entry:

System.IO.IOException: The process cannot access the file because it is being used by another process.
   at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
   at System.IO.File.InternalMove(String sourceFileName, String destFileName, Boolean checkHost)
   at Duplicati.Library.Main.Operation.RepairHandler.Run(IFilter filter)
   at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify(BackendManager backend, String protectedfile)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass13_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)

I am having the same issue. A computer restart does not fix it. I just updated to 2.0.4.5_beta_2018-11-28.

“System.IO.IOException: The process cannot access the file because it is being used by another process.”

Found this from @magnust

Yeah, first I verified it wasn’t a file in my backup job that was locked, by temporarily changing the backup source to a single Word file that for sure wasn’t locked. Still got the error.

But I fixed the problem by first running a repair job only. And when that was done I started the backup job. No errors for me after that.

There’s a problem with somewhat similar symptoms that was being chased for a while in Error “database is locked” when trying to run a backup after a force stop #3445, and was mentioned in Restore fails with “Failed to connect: The database file is locked database is locked”, although there was concern over the version difference. @mikaelmello did a historical version test for the GitHub issue and thought that the issue arrived in 2.0.3.6. While I wouldn’t suggest doing this in anything but a test environment, testing the issue in 2.0.3.3 and 2.0.3.6 might give some clue as to whether these problems are related, or maybe someone else will offer their guess.

I’ve got a theory and related question…

Theory:
Multi-threading is somehow allowing the backup process to start BEFORE the test process has completed. This would explain why a test backup of ONLY a single non-Duplicati document still causes the issue: as different threads, one would end up blocking the other, as we are seeing.

Question:
Why is this not happening to “everybody”?

@magnust, is the issue still happening if you don’t run a repair job first?

If so, what happens if you:

  1. repair
  2. backup
  3. backup again

Does it still happen if you use any of these? (There’s a command-line example after the list.)

  • --no-backend-verification=true
  • --backup-test-samples=0
  • --snapshot-policy=on (or required)
  • --concurrency-max-threads=1
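
If it’s easier to test from the command line, a rough sketch of adding one of them to a run would be something like the line below (the storage URL and source path are placeholders; in the web UI the same option names go under the job’s Advanced options, and I’d try them one at a time rather than all together):

   rem example test run with backend verification turned off
   Duplicati.CommandLine.exe backup "sftp://backupserver/duplicati" "C:\Data" --no-backend-verification=true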

This sounds backwards, but: sorry, the problem is gone. Hehe.

What I mean is: sorry I can’t test and verify your theory, which by the way sounds very logical! A combined repair+backup failed with the error, but first running just a repair, and when that was done starting the backup, all worked fine. And after that, backups run just fine.

I had the same issue (details on my setup to follow). In my case, simply using
--concurrency-max-threads=1 and running a repair before running the backup solved the issue for me.

The original issue was that I was getting the same error when trying to run a backup or repair. I had rebooted the computer to try to release the lock before making the change; no difference. I will note that the backup process’s message at the top of the web page progressed to (Deleting unwanted files) before it would error out.

After changing the setting to 1 thread I tried to back up, which gave the same error (didn’t make sense to me), but then I decided to do a repair (which succeeded) and then a backup (which succeeded).

2.0.4.5_beta_2018-11-28
Backup destination: SFTP

Thanks a lot for posting this! I’ve been putting off fixing this issue for months, and when I finally sat down to fix it, your post solved it in two clicks! Thanks a bunch!

I have Duplicati running on at least 30 computers. This problem keeps showing up regularly across those computers. I can confirm that “--concurrency-max-threads=1” does not stop it from happening. It has been a year!!! This bug really ruins the program for me.

It has been a year since a lot of questions were asked that weren’t answered, and after that there were reports that it was OK. Fixing bugs requires better information than that, ideally steps to reproduce the bug so developers can look.

Please talk about yours, starting at the top. Are you hitting stop, restarting jobs, and getting the same stack? You might need to watch About → Show log → Live → Warning, or set up a --log-file to catch the stack.
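
If the live log is hard to catch at the right moment, a file log can be set as extra options on the job or the command line. A minimal sketch (the path is just an example; the level can be raised from Warning to Verbose or Profiling for more detail):

   --log-file="C:\Temp\duplicati-job.log" --log-file-log-level=Warning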

With 30 computers, are any less important such that you could dare run Canary, to see if it’s any better? Canary is always a bit unpredictable but at the moment is in pretty good shape trying to become a Beta.

Thanks for this thread. Just encountered this issue on one of my 4 backup jobs off the same system. Running version 2.0.5.1_beta_2020-01-18. Never encountered this issue until this version.

Regardless, running the repair job fixed it and the backup is currently running.

Edit: Looks like it crashed before it finished the backup.
Edit2: Back to failing like it did originally.
Edit3: Re-ran repair x2, then ran a successful backup. And successfully started a second. Fixed?
Edit4: Second finished successfully as well.

Just ran into this myself; I’ve been using 2.0.5.1_beta since it was released.

The first error shows this:

Failed: Path cannot be null.
Parameter name: path
Details: System.ArgumentNullException: Path cannot be null.
Parameter name: path
   at Duplicati.Library.Main.BackendManager.WaitForEmpty(LocalDatabase db, IDbTransaction transation)
   at Duplicati.Library.Main.Operation.CompactHandler.DoDelete(LocalDeleteDatabase db, BackendManager backend, IEnumerable`1 deleteableVolumes, IDbTransaction& transaction)
   at Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact, BackendManager sharedManager)
   at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)

Log data:
2020-03-07 15:41:27 -05 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.ArgumentNullException: Path cannot be null.
Parameter name: path
   at Duplicati.Library.Main.BackendManager.WaitForEmpty(LocalDatabase db, IDbTransaction transation)
   at Duplicati.Library.Main.Operation.CompactHandler.DoDelete(LocalDeleteDatabase db, BackendManager backend, IEnumerable`1 deleteableVolumes, IDbTransaction& transaction)
   at Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact, BackendManager sharedManager)
   at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()

Any attempt to run again shows errors similar to the “process cannot access the file” error above.

It doesn’t actually record a job log when you click on Show log under the backup name. It is sending out the email report, however, and the red warning popup.

Everything worked yesterday with the job.

Edit: Repair and run again seems to have corrected it.

I think the job log isn’t saved until the backup finishes, which means (unfortunately) that it’s not there to help when the backup has a problem before finishing. The email report seems to scrape some data out that’s nowhere else, but it’s still just bits and pieces. Ideally a problem that’s reproducible can get a --log-file at some tolerably wordy level, but it’s not something most people (or Duplicati by default) will normally run.

Backup failing: “Path cannot be null” and “TLS warning: SSL3 alert write: fatal: bad record mac”
was located by Google as something similar to your stack trace, except it adds another one that possibly makes more sense for the “Parameter name: path” and “Path cannot be null” errors. A generous quote looks like:

2020-02-24 13:52:49 -04 - [Information-Duplicati.Library.Main.BackendManager-RenameRemoteTargetFile]: Renaming "duplicati-ia869d67054c24dd9a073856c903b2f94.dindex.zip.aes" to "duplicati-idc8b6aaaa58349af9bba024a83907605.dindex.zip.aes"
2020-02-24 13:52:59 -04 - [Retry-Duplicati.Library.Main.BackendManager-RetryPut]: Operation Put with file duplicati-idc8b6aaaa58349af9bba024a83907605.dindex.zip.aes attempt 5 of 5 failed with message: Path cannot be null.
Parameter name: path
System.ArgumentNullException: Path cannot be null.
Parameter name: path
   at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost)
   at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
   at Duplicati.Library.Encryption.EncryptionBase.Encrypt(String inputfile, String outputfile)
   at Duplicati.Library.Main.BackendManager.FileEntryItem.Encrypt(IEncryption encryption, IBackendWriter stat)
   at Duplicati.Library.Main.BackendManager.DoPut(FileEntryItem item)
   at Duplicati.Library.Main.BackendManager.ThreadRun()
2020-02-24 13:52:59 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Failed: duplicati-idc8b6aaaa58349af9bba024a83907605.dindex.zip.aes ()
2020-02-24 13:52:59 -04 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.ArgumentNullException: Path cannot be null.
Parameter name: path
   at Duplicati.Library.Main.BackendManager.WaitForEmpty(LocalDatabase db, IDbTransaction transation)
   at Duplicati.Library.Main.Operation.CompactHandler.DoDelete(LocalDeleteDatabase db, BackendManager backend, IEnumerable`1 deleteableVolumes, IDbTransaction& transaction)
   at Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact, BackendManager sharedManager)
   at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()

and I’m not familiar enough with the code to know exactly how these two stack traces might be related.

At a high level, this looks like the compact that runs after the backup and after deletions done by your retention policy had an issue. I just filed 403 error during compact forgot a dindex file deletion, getting Missing file error next run. #4129 (whose title might change because it’s now broadened in its scope).

You could consider filing yours as an issue so that it will be in the queue for the developers to look at. Whether or not it’s the cause of “The process cannot access the file” isn’t clear, but fixing that will be much more likely if it’s reproducible, and sometimes this means finding a lead-in. Thanks for the data.

Thanks: “--concurrency-max-threads=1” and repair worked for me.

--concurrency-max-threads=1 just makes it slower and does not solve the problem for me. I get this all the time on the computers. It is SO ANNOYING. It happens any time there is any kind of glitch, like the internet down or a server down or whatever. Sometimes it even happens with local servers. And once it happens… it often continues to block all backups until I repair the database. I have auto-repair database turned on, but that does not always work. And then, even if it fixes the problem… the error is there FOREVER until someone opens the web page and dismisses it, even though it was an issue once, weeks ago, and many backups have succeeded since.

I can confirm that this bug is still occurring. My backup has successfully run 215 times, but it now returns “The process cannot access the file because it is being used by another process”. I assume the DB file is locked, but I actually don’t know, as no general log file is produced and the error message doesn’t say which file it is referring to. Using Process Explorer I can confirm that the SQLite database file is not open unless a backup is running, and it is only opened by Duplicati.Server.exe when a backup runs.
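
As a command-line cross-check of the same thing, Sysinternals handle.exe can list which processes hold a given file open. A minimal sketch, assuming the job database is the randomly named .sqlite file in the Duplicati data folder (the filename below is made up):

   rem search all open handles for anything matching the job database filename
   handle.exe CAHXWOEJJD.sqlite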

A good start to investigating this issue would be to fix the general log file so that logs are produced in this circumstance and/or modify the error message so we know which file the error is referring to.

I’ll try setting concurrency-max-threads=1 and repairing the database, but that is a work-around, not a solution.

I’m going to guess that concurrency-max-threads=1 might only be working for some people because it only affects backup threads; it doesn’t affect the database, which might also be running on its own thread. I’m guessing that concurrency-max-threads isn’t a setting that corrects the problem; rather, it alters thread timing so that some people don’t see the problem. The reason I’m saying this is that I’m guessing it’s a race condition.

I have two servers running Duplicati. The first, server A, has been running Duplicati for a little over a month without any issues. This server runs backups 4 times each weekday and keeps backups for 2 weeks. The second, server B, has almost the same settings but is failing with this error. The difference is that this server backs up 8 times every day and keeps backups for 4 weeks. Server B’s database is obviously larger, with more backup information, so it is slower to access; this hypothesis is further strengthened by the fact that server B is the less powerful of the two servers.

Anyway, it’s a guess, as I don’t have time to go through the code.

Setting concurrency-max-threads=1, restarting the Duplicati service, rebuilding the database, setting concurrency-max-threads=0, and restarting the Duplicati service again allowed me to run further backups. This is, of course, a work-around; there’s still a bug lurking in there somewhere.
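
For reference, the service restarts in that sequence were just the standard Windows commands from an elevated prompt (this assumes Duplicati was installed as a Windows service under its default name; the concurrency-max-threads changes themselves were made under the job’s Advanced options in the web UI):

   rem stop and restart the Duplicati service
   net stop Duplicati
   net start Duplicati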

That work-around didn’t last long. I got one successful backup out of it, but the issue is now back again. I think this reinforces the idea that DB size/performance is slowing one thread down in a race condition, and the thread that was previously winning the race is now coming in second place.