Is there a way to set a maximum time for the backup job?

Hello people!

I’m new here and haven’t found a topic that answers my question, not even in the manual.
If by chance I’m writing in the wrong place, I’m sorry.

I am looking for a way to set a maximum time for the backup job. Does anyone know if there is an option that limits the maximum execution time of a task?

Sometimes the backup job does not complete: the backup process crashes and the next tasks are never performed. If I can limit the execution time, my problem will be solved!

Thank you.

No, there isn’t a built-in mechanism to stop a backup prematurely at a set time.

I think we should try to find out why your backup process crashes. How do you know it crashes? What error messages do you see?

Thanks for answering.
The process just doesn’t finish; it keeps running without using any CPU or network. Since the process never completes, it doesn’t generate any log errors.
The last time the problem occurred, the process had been running for two days without completing. I killed the Duplicati application manually and reopened it, and everything went back to working normally.
So I think it would be good if there were a way to end the job after X hours of running: flag the backup as “error” and move on to the next task…

I still think the right option is to fix the root problem.

Next time the backup appears to be hung, go to About -> Show Log -> Live Log and set the dropdown to Verbose. See if there is any activity there. If not, show us what the latest lines are.
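If the live log is hard to catch at the right moment, another option is to have Duplicati write a verbose log to disk, so a hang leaves a trace. A minimal command-line sketch (the storage URL, source folder, and log path are placeholders; both options can also be set as advanced options in the GUI):

Duplicati.CommandLine.exe backup <storage-url> <source-folder> --log-file=C:\Logs\duplicati.log --log-file-log-level=Verbose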

Found the problem!
It appears to be related to the connection with Dropbox (OAuth). The system doesn’t keep retrying…
Here is the log:

"
Oct 22, 2020 1:34 PM: Backend event: Put - Retrying: duplicati-bbc45b87d06274ba6b2155ebdf68445b0.dblock.zip.aes (499,91 MB)
Oct 22, 2020 1:34 PM: Operation Put with file duplicati-bbc45b87d06274ba6b2155ebdf68445b0.dblock.zip.aes attempt 4 of 5 failed with message: O serviço OAuth atualmente está sobrecarregado, tente novamente em algumas horas
Oct 22, 2020 1:34 PM: Backend event: Put - Failed: duplicati-b63802a25ff044fdd9054f2da1542af18.dblock.zip.aes (499,95 MB)
Oct 22, 2020 1:34 PM: Operation Put with file duplicati-b63802a25ff044fdd9054f2da1542af18.dblock.zip.aes attempt 6 of 5 failed with message: O serviço OAuth atualmente está sobrecarregado, tente novamente em algumas horas
Oct 22, 2020 1:30 PM: Backend event: Put - Started: duplicati-b63802a25ff044fdd9054f2da1542af18.dblock.zip.aes (499,95 MB)
Oct 22, 2020 1:30 PM: Renaming “duplicati-b8238dda017724cd6a1bbd66a09f2a02c.dblock.zip.aes” to “duplicati-b63802a25ff044fdd9054f2da1542af18.dblock.zip.aes”
Oct 22, 2020 1:30 PM: Backend event: Put - Rename: duplicati-b63802a25ff044fdd9054f2da1542af18.dblock.zip.aes (499,95 MB)
Oct 22, 2020 1:30 PM: Backend event: Put - Rename: duplicati-b8238dda017724cd6a1bbd66a09f2a02c.dblock.zip.aes (499,95 MB)
Oct 22, 2020 1:30 PM: Backend event: Put - Retrying: duplicati-b8238dda017724cd6a1bbd66a09f2a02c.dblock.zip.aes (499,95 MB)
Oct 22, 2020 1:30 PM: Operation Put with file duplicati-b8238dda017724cd6a1bbd66a09f2a02c.dblock.zip.aes attempt 5 of 5 failed with message: O serviço OAuth atualmente está sobrecarregado, tente novamente em algumas horas
Oct 22, 2020 1:29 PM: Backend event: Put - Started: duplicati-bbc45b87d06274ba6b2155ebdf68445b0.dblock.zip.aes (499,91 MB)
Oct 22, 2020 1:29 PM: Renaming “duplicati-ba1f61ca58d3d4c6d9127ff72467b7e53.dblock.zip.aes” to “duplicati-bbc45b87d06274ba6b2155ebdf68445b0.dblock.zip.aes”
Oct 22, 2020 1:29 PM: Backend event: Put - Rename: duplicati-bbc45b87d06274ba6b2155ebdf68445b0.dblock.zip.aes (499,91 MB)
Oct 22, 2020 1:29 PM: Backend event: Put - Rename: duplicati-ba1f61ca58d3d4c6d9127ff72467b7e53.dblock.zip.aes (499,91 MB)
Oct 22, 2020 1:29 PM: Backend event: Put - Retrying: duplicati-ba1f61ca58d3d4c6d9127ff72467b7e53.dblock.zip.aes (499,91 MB)
Oct 22, 2020 1:29 PM: Operation Put with file duplicati-ba1f61ca58d3d4c6d9127ff72467b7e53.dblock.zip.aes attempt 3 of 5 failed with message: O serviço OAuth atualmente está sobrecarregado, tente novamente em algumas horas
"

Sounds like a back-end issue. What destination type are you using?

Destination: Dropbox.

Google translates your error as:

The OAuth service is currently overloaded, please try again in a few hours

Instead of their back end being overloaded, I wonder if it’s really that you’re going over some quota limit - as in you’re uploading too much data too quickly, or some other reason. I’m not familiar with Dropbox in this regard but I know some other services have limits like this.

Maybe someone else has other ideas.

“Q: How do I check my Dropbox quota?” would be worth a look.
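If you’d rather check from a script, the Dropbox HTTP API reports quota too. A minimal sketch, assuming you have generated a Dropbox API access token and put it in DROPBOX_TOKEN (this is a generic API token, separate from the credentials Duplicati stores):

curl -X POST https://api.dropboxapi.com/2/users/get_space_usage --header "Authorization: Bearer $DROPBOX_TOKEN"

The JSON response includes “used” and “allocation” values, in bytes.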

Clicking on the Duplicati message might also show the error path.

Going in reverse, a path might be:

Interface DropboxAPI.UploadRequest (a different API than Duplicati uses, but errors might be the same)

The most common error codes you can expect from this call are 404 (path to upload not found), 507 (user over quota), and 400 (unexpected parent rev).

Dropbox for HTTP Developers

429 Your app is making too many requests for the given user or team and is being rate limited. Your app should wait for the number of seconds specified in the “Retry-After” response header before trying again.

The Content-Type of the response can be JSON or plaintext. If it is JSON, it will be of type RateLimitError.
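As a sketch of what honoring that header would look like (a hypothetical shell loop, reusing the DROPBOX_TOKEN assumption from above; this is not what Duplicati itself does, since its retry delay is fixed):

while :; do
  status=$(curl -s -o /dev/null -D headers.txt -w '%{http_code}' -X POST https://api.dropboxapi.com/2/users/get_space_usage --header "Authorization: Bearer $DROPBOX_TOKEN")
  [ "$status" != "429" ] && break
  # Wait as long as Retry-After asks; fall back to 60s if the header is absent.
  wait=$(grep -i '^Retry-After:' headers.txt | tr -d '\r' | awk '{print $2}')
  sleep "${wait:-60}"
done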

How to resolve a “Banned links: shared link or file request not working” error message

Why am I seeing “Error 429”?

If you see “Error 429”, then the owner of the shared link or file request may have caused too much traffic.

Duplicati might not give enough info to tell whether it’s a 429 or a 507. A 507 may fit an upload error better; however, clearing a Dropbox quota error just by manually restarting Duplicati would be odd. Look at your quota anyway, though it’s also possible that Dropbox shared files going over quota affect the uploads as well.

If the Duplicati issue is something like this theory, maybe Dropbox support can figure out which limit was hit.

I had already checked my Dropbox storage quota: I have 2 TB in total, with 1 TB used. I wonder if the problem is with the transfer quota instead…
I ran a new test, comparing my standard 2 TB Dropbox Plus account with a 3 TB Dropbox Professional account. On Dropbox Professional the error does not occur; I have already transferred more than 1 TB (and it keeps transferring) without problems. This leads me to believe that the problem really lies in a transfer limitation of the Dropbox Plus account.

I will continue my tests and post any news here.

While I’m at it: do you use any other service, similar to Dropbox, that might be more stable?

Thank you so much for your help so far!

It does seem that people using OneDrive- and Dropbox-type backends have more issues than those using Amazon S3, Azure Blob, or Backblaze B2. Personally I’d recommend B2: it’s extremely reliable and much cheaper than S3, though probably still more than you’re paying with Dropbox. B2 charges something like USD $0.005/GB/month, so 1 TB would cost about USD $5/mo. B2 is not available in as many regions as S3, though.

I use Google Drive and OneDrive, and have used B2. All glitched sometimes, but have been good enough.
A note about Google Drive: there’s a 750 GB daily upload limit that sometimes causes problems…

Duplicati doesn’t retry forever, but you can ride through most glitches like the ones I was getting with a combination of these two options (see the sketch after the list):

--number-of-retries
--retry-delay
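A minimal sketch of passing these on the Duplicati command line (the storage URL, source folder, and values are placeholders; the same names work as advanced options in the GUI):

Duplicati.CommandLine.exe backup <storage-url> <source-folder> --number-of-retries=10 --retry-delay=30s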

If you actually hit some sort of Dropbox limit, there’s no telling how long it takes to end. You can ask.

I may be totally wrong, but here is one suggestion that may be simple to test: have you tried reducing “--asynchronous-concurrent-upload-limit” from its default of 4 down to 1? I had a curious upload problem where my cloud provider (German Telekom) closed my connection when I tried “too many” concurrent uploads. After going back to a single upload process (--asynchronous-concurrent-upload-limit=1) I’m fine again. See this thread for details.
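For example, a hypothetical test run (storage URL and source folder are placeholders):

Duplicati.CommandLine.exe backup <storage-url> <source-folder> --asynchronous-concurrent-upload-limit=1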


A big advantage I see in Dropbox is the file history function: I can make a backup copy every day using only one copy’s worth of space, yet restore from up to six months back (in the Professional version).
I have already tested going back and recovering from an old date with Duplicati, and it works perfectly!
I would like a more professional solution than Dropbox, but then there is this problem of not having the file history.

I applied the settings you suggested and they seem to have worked very well! Today my link fluctuated during the backup procedure and, even so, the backup managed to complete. I will run more tests to validate this solution, but apparently it works well!
I configured it to retry for up to 2 hours, every 30 seconds:

--number-of-retries=240
--retry-delay=30s

Thanks for the tip!

This makes sense, Eni_Ki. I will test the previous solution a little more, and if the problem occurs again I will apply your tip in a new test.

Thank you.

You can view “Complete log” → “BackendStatistics” → “RetryAttempts” if you want to see how many retries were actually used.

Even though this looked like a Dropbox rejection, I suppose link issues might somehow have contributed.

Dear all,
Just to give feedback: after I implemented TS678’s tip, I have had no more problems! All backups are running without fail!
If there is any other news, I will comment here.