2.0.4.5 - The process cannot access the file because it is being used by another process

This sounds backwards, but: sorry, the problem is gone. Hehe.

What I mean is: sorry, I can’t test and verify your theory, which by the way sounds very logical! Running a combined repair+backup failed with the error, but first running just a repair and, once that was done, starting a backup worked fine. After that, backups have run just fine.

I had the same issue (details on my setup to follow). In my case, simply using
--concurrency-max-threads=1 and running a repair before running the backup solved the issue for me.

The original issue was that I was getting the same error when trying to run a backup or a repair. I had rebooted the computer to try to release the lock before making the change; no difference. I will note that the backup process's status message at the top of the webpage progressed to “Deleting unwanted files” before it would error out.

After changing the setting to 1 thread, I tried a backup, which failed with the same error (which didn’t make sense to me), but I then decided to do a repair (which succeeded) and then a backup (which succeeded).

Version: 2.0.4.5_beta_2018-11-28
Backup destination: SFTP
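The workaround described above can be sketched as a command-line sequence. This is illustrative only: the binary name varies by platform (`duplicati-cli` on Linux, `Duplicati.CommandLine.exe` on Windows), and the SFTP URL, source path, and passphrase are placeholders, not values from this thread.

```shell
# Illustrative only: the storage URL, source path, and passphrase are placeholders.
export PASSPHRASE="your-backup-passphrase"
STORAGE_URL="ssh://user@backup.example.com//backups/duplicati"

# 1. Repair the local database first, with concurrency limited to one thread
duplicati-cli repair "$STORAGE_URL" --concurrency-max-threads=1

# 2. Only after the repair succeeds, run the backup as a separate step
duplicati-cli backup "$STORAGE_URL" /home/user/data --concurrency-max-threads=1
```

The key point from the post above is the ordering: repair and backup as two separate runs, not a combined repair+backup.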


Thanks a lot for posting this! I’d been putting off fixing this issue for months, and when I finally sat down to do it, your post solved it in two clicks! Thanks a bunch!

I have Duplicati running on at least 30 computers. This problem keeps showing up regularly across those computers. I can confirm that --concurrency-max-threads=1 does not stop it from happening. It has been a year!!! This bug really ruins the program for me.


A year since a lot of questions were asked that weren’t answered, followed by reports that it was OK. Fixing bugs requires better information than that, ideally steps to reproduce the bug so developers can look.

Please talk about yours, starting at the top. Are you hitting stop, restarting jobs, and getting the same stack? You might need to watch About --> Show log --> Live --> Warning, or set up a --log-file to catch the stack.
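Setting up a persistent log can be sketched roughly as below. This is illustrative only: the URL, source path, and log location are placeholders, and the same two options can instead be added in the GUI under the job’s advanced options.

```shell
# Illustrative only: URL, source path, and log path are placeholders.
# --log-file writes a persistent log; --log-file-log-level controls verbosity
# (Warning catches warnings and errors; Retry or Verbose capture more detail).
duplicati-cli backup "ssh://user@backup.example.com//backups/duplicati" /home/user/data \
    --log-file=/var/log/duplicati/job.log \
    --log-file-log-level=Retry
```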

With 30 computers, are any less important such that you could dare run Canary, to see if it’s any better? Canary is always a bit unpredictable but at the moment is in pretty good shape trying to become a Beta.

Thanks for this thread. Just encountered this issue on one of my 4 backup jobs off the same system. Running version 2.0.5.1_beta_2020-01-18. Never encountered this issue until this version.

Regardless, running the repair job fixed it and the backup is currently running.

Edit: Looks like it crashed before it finished the backup.
Edit2: Back to failing like it did originally.
Edit3: Re-ran repair x2, then ran a successful backup. And successfully started a second. Fixed?
Edit4: Second finished successfully as well.


Just ran into this myself; I’ve been using 2.0.5.1_beta since it was released.

The first error showed this:

Failed: Path cannot be null.
Parameter name: path
Details: System.ArgumentNullException: Path cannot be null.
Parameter name: path
   at Duplicati.Library.Main.BackendManager.WaitForEmpty(LocalDatabase db, IDbTransaction transation)
   at Duplicati.Library.Main.Operation.CompactHandler.DoDelete(LocalDeleteDatabase db, BackendManager backend, IEnumerable`1 deleteableVolumes, IDbTransaction& transaction)
   at Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact, BackendManager sharedManager)
   at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass14_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)

Log data:
2020-03-07 15:41:27 -05 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.ArgumentNullException: Path cannot be null.
Parameter name: path
   at Duplicati.Library.Main.BackendManager.WaitForEmpty(LocalDatabase db, IDbTransaction transation)
   at Duplicati.Library.Main.Operation.CompactHandler.DoDelete(LocalDeleteDatabase db, BackendManager backend, IEnumerable`1 deleteableVolumes, IDbTransaction& transaction)
   at Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact, BackendManager sharedManager)
   at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()

Any attempt to run again shows errors similar to the “process in use” error above.

The job doesn’t actually record a log when you click “Show log” under the backup name. It is sending out the email report, however, and the red warning popup.

Everything worked yesterday with the job.

Edit: Repair and run again seems to have corrected it.

I think the job log isn’t saved until the backup finishes, which means (unfortunately) that it’s not there to help when the backup has a problem before finishing. The email report seems to scrape out some data that’s nowhere else, but it’s still just bits and pieces. Ideally a reproducible problem can get a --log-file at some tolerably wordy level, but it’s not something most people (or Duplicati by default) normally run.

Backup failing: “Path cannot be null” and “TLS warning: SSL3 alert write: fatal: bad record mac”
was located by Google as something similar to your stack trace, except it adds another one that possibly makes more sense for the “Parameter name: path” and “Path cannot be null” error. A generous quote looks like:

2020-02-24 13:52:49 -04 - [Information-Duplicati.Library.Main.BackendManager-RenameRemoteTargetFile]: Renaming "duplicati-ia869d67054c24dd9a073856c903b2f94.dindex.zip.aes" to "duplicati-idc8b6aaaa58349af9bba024a83907605.dindex.zip.aes"
2020-02-24 13:52:59 -04 - [Retry-Duplicati.Library.Main.BackendManager-RetryPut]: Operation Put with file duplicati-idc8b6aaaa58349af9bba024a83907605.dindex.zip.aes attempt 5 of 5 failed with message: Path cannot be null.
Parameter name: path
System.ArgumentNullException: Path cannot be null.
Parameter name: path
   at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost)
   at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
   at Duplicati.Library.Encryption.EncryptionBase.Encrypt(String inputfile, String outputfile)
   at Duplicati.Library.Main.BackendManager.FileEntryItem.Encrypt(IEncryption encryption, IBackendWriter stat)
   at Duplicati.Library.Main.BackendManager.DoPut(FileEntryItem item)
   at Duplicati.Library.Main.BackendManager.ThreadRun()
2020-02-24 13:52:59 -04 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: Put - Failed: duplicati-idc8b6aaaa58349af9bba024a83907605.dindex.zip.aes ()
2020-02-24 13:52:59 -04 - [Error-Duplicati.Library.Main.Operation.BackupHandler-FatalError]: Fatal error
System.ArgumentNullException: Path cannot be null.
Parameter name: path
   at Duplicati.Library.Main.BackendManager.WaitForEmpty(LocalDatabase db, IDbTransaction transation)
   at Duplicati.Library.Main.Operation.CompactHandler.DoDelete(LocalDeleteDatabase db, BackendManager backend, IEnumerable`1 deleteableVolumes, IDbTransaction& transaction)
   at Duplicati.Library.Main.Operation.CompactHandler.DoCompact(LocalDeleteDatabase db, Boolean hasVerifiedBackend, IDbTransaction& transaction, BackendManager sharedBackend)
   at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction& transaction, Boolean hasVerifiedBacked, Boolean forceCompact, BackendManager sharedManager)
   at Duplicati.Library.Main.Operation.BackupHandler.CompactIfRequired(BackendManager backend, Int64 lastVolumeSize)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__20.MoveNext()

and I’m not familiar enough with the code to know exactly how these two stack traces might be related.

At a high level, this looks like the compact that runs after the backup, and after deletions done by your retention policy, had an issue. I just filed “403 error during compact forgot a dindex file deletion, getting Missing file error next run” #4129 (whose title might change because its scope has now broadened).

You could consider filing yours as an issue so that it will be in the queue for the developers to look at. Whether or not it’s the cause of “The process cannot access the file” isn’t clear, but fixing that will be much more likely if it’s reproducible, and sometimes this means finding a lead-in. Thanks for the data.

Thanks: --concurrency-max-threads=1 and repair worked for me.

--concurrency-max-threads=1 just makes it slower and does not solve the problem for me. I get this all the time on the computers. It is SO ANNOYING. It happens any time there is any kind of glitch, like the internet being down, or a server being down, or whatever. Sometimes it even happens with local servers. And once it happens, it often continues to block all backups until I repair the database. I have auto-repair turned on, but that does not always work. And even if it fixes the problem, the error is there FOREVER until someone opens the webpage and dismisses it, even though it was an issue once, weeks ago, and many backups have succeeded since.

I can confirm that this bug is still occurring. My backup has successfully run 215 times, but it now returns “The process cannot access the file because it is being used by another process”. I assume the DB file is locked, but I don’t actually know, as no general log file is produced and the error message doesn’t say which file it is referring to. Using Process Explorer I can confirm that the SQLite database file is not open unless a backup is running, and it is only opened by Duplicati.Server.exe when a backup runs.

A good start to investigating this issue would be to fix the general log file so that logs are produced in this circumstance, and/or to modify the error message so we know which file it is referring to.

I’ll try setting concurrency-max-threads=1 and repairing the database, but that is a work-around, not a solution.

I’m going to guess that concurrency-max-threads=1 might only work for some people, as it only affects backup threads; it doesn’t affect the database, which might also be running on its own thread. I’m guessing that concurrency-max-threads isn’t a setting that corrects the problem; rather, it alters thread timing so that some setups don’t see the problem. The reason I’m saying this is that I’m guessing it’s a race condition.

I have two servers running Duplicati. The first, server A, has been running Duplicati for a little over a month without any issues. This server runs backups 4 times each weekday and keeps backups for 2 weeks. The second, server B, has almost the same settings but is failing with this error. The difference is that this server backs up 8 times every day and keeps backups for 4 weeks. Server B’s database is obviously larger, with more backup information, so it is slower to access; this hypothesis is further strengthened by the fact that server B is the less powerful of the two servers.

Anyway, it’s a guess, as I don’t have time to go through the code.

Setting concurrency-max-threads=1, restarting the Duplicati service, rebuilding the database, setting concurrency-max-threads=0, and restarting the Duplicati service again allowed me to run further backups. This is, of course, a work-around; there’s still a bug lurking in there somewhere.
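On a Linux host running Duplicati as a service, that sequence might look roughly like the following. This is illustrative only: the service name and storage URL are assumptions, and the thread-count changes themselves are made in the job’s advanced options in the GUI.

```shell
# Illustrative only: service name and storage URL are assumptions.
# 1. In the GUI, set --concurrency-max-threads=1 on the job, then restart:
sudo systemctl restart duplicati

# 2. Rebuild the job's local database (Database -> Recreate in the GUI),
#    or run a repair from the CLI:
duplicati-cli repair "ssh://user@backup.example.com//backups/duplicati" \
    --concurrency-max-threads=1

# 3. Set --concurrency-max-threads back to 0 (the default) and restart again:
sudo systemctl restart duplicati
```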

That work-around didn’t last long. I got one successful backup out of it, but the issue is now back again. I think this reinforces the idea that DB size/performance is slowing one thread down in a race condition, and the thread that was previously winning the race is now coming in second place.

If the stack trace details look similar to the original post, there’s an update you might try:

One technical writeup theorizing about what’s going on is:

Database is locked when performing auto-cleanup #4631

If the above map helps any, have at it. Duplicati is very much in need of people willing to help in any area, whether it’s helping on the forum, testing, documentation, or (all-important to forward progress) development.

Thank you ts678… when the database locks for me, it occurs shortly after the “verify files” part of the backup, i.e. at the beginning, so I don’t think it’s associated with auto-cleanup in my case.

I again set max threads to 1 and managed to rebuild the database. I have changed the backup frequency to every 6 hours (i.e. 4 times a day) and now only keep the last 10 days of backups. This should reduce the current database size significantly (and reduce it a little further over time, as the 4-hourly backups are replaced with 6-hourly ones).

The backup looked like it was running well, but just after “Deleting unwanted files” it reported “The remote server returned an error: (403) forbidden”… this is another issue and a topic for a different thread. Now I can’t get past the 403 error. I’m stuck; I’ll try again later.

If you mean after it starts that phase, that’s exactly when auto-cleanup may run, if the option asks for it. Does yours?

It’s happening after “Verify Files”; it sits on Verify Files for a while, so I assume it has completed. I don’t know which steps in the process display something in the notification bar, but nothing else displays, as far as I can see, before the error occurs.

Just looking through RunAsync, which calls PreBackupVerify… PreBackupVerify is called on line 466. On line 482 the progress bar is definitely updated to “Processing Files”. I suspect the code is having issues in UploadSyntheticFileList on line 469 or RecreateMissingIndexFiles on line 479. Both of these areas appear to pass in a new instance of the database (which is suspicious given the error, but I’m not sure, because “var” is used on line 431 so I don’t know what the data type is - it’s a significant problem when using “var”). I haven’t delved into these two areas, so I’m not sure what’s in them.

I have seen a potential issue… RunAsync is obviously called asynchronously. In RunAsync, variables such as m_result, m_database, etc. are shared, but they don’t have locks around them when their properties are assigned. If m_result (for example) is cached, then its properties are also cached; writing to a property of m_result could write to a cached copy, and without a lock statement this won’t be flushed to memory immediately. Correct me if I’m wrong, but I would have thought that all these shared object property assignments should have locks around them.

One problem with using the status to know where you are is that all you know is that you’ve gotten as far as the status shown, but not as far as the next one; and some statuses aren’t specifically seen because other displays fill the status bar. One can see the phase at About → System info, e.g. Backup_ProcessingFiles.

Stuck on “Waiting for upload to finish” gave an example of finding phases at the finishing end of backup.

One can sometimes also study log messages (which can be set quite high) to try to figure out locations. Ruling out auto-cleanup, though, may be as easy as your saying it is or is not in your Advanced options.
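To rule auto-cleanup out actively, one could run the job once with the option explicitly disabled. A hypothetical CLI form (the URL and path are placeholders; the same option can be set in the GUI’s Advanced options):

```shell
# Illustrative only: URL and source path are placeholders.
# --auto-cleanup defaults to false; setting it explicitly removes any doubt
# about whether the cleanup step is involved in the failure.
duplicati-cli backup "ssh://user@backup.example.com//backups/duplicati" /home/user/data \
    --auto-cleanup=false
```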

You would have to ask a C# developer, which I’m not, but sometimes I can follow Duplicati code around. There is unfortunately a huge need for Duplicati developers, as well as for people to help with anything.

Now that I’ve fixed the 403 (forbidden) error I was receiving (that we’ve been discussing in another thread) and reduced the size of my backup I’m not having this issue any longer. I’ll revert my settings to backing up often to try to cause the issue and if it occurs I’ll investigate from there.

At some point I’ll investigate where I think locking is missing… that issue seems to be quite common in the code.
