Stopped working after the 20th job

Hi!

I successfully uploaded 20 jobs (more than 1.5TB) to B2, but after that, the 21st job simply doesn’t work.

Or rather, it works for some time and then hangs. And after I stop/pause the job, it takes forever and does not start again.

Is there some kind of limit to the number of configured jobs?

Duplicati - 2.0.3.3_beta_2018-04-02
OS: Ubuntu 18.04.1 LTS (inside a Proxmox container)
Mono JIT compiler version 5.14.0.177 (tarball Mon Aug 6 09:07:45 UTC 2018)

Latest error message:

Fatal error
System.Threading.ThreadAbortException: Thread was being aborted.
  at (wrapper managed-to-native) System.Threading.WaitHandle.Wait_internal(intptr*,int,bool,int)
  at System.Threading.WaitHandle.WaitOneNative (System.Runtime.InteropServices.SafeHandle waitableSafeHandle, System.UInt32 millisecondsTimeout, System.Boolean hasThreadAffinity, System.Boolean exitContext) [0x00044] in <2943701620b54f86b436d3ffad010412>:0 
  at System.Threading.WaitHandle.InternalWaitOne (System.Runtime.InteropServices.SafeHandle waitableSafeHandle, System.Int64 millisecondsTimeout, System.Boolean hasThreadAffinity, System.Boolean exitContext) [0x00014] in <2943701620b54f86b436d3ffad010412>:0 
  at System.Threading.WaitHandle.WaitOne (System.Int64 timeout, System.Boolean exitContext) [0x00000] in <2943701620b54f86b436d3ffad010412>:0 
  at System.Threading.WaitHandle.WaitOne (System.Int32 millisecondsTimeout, System.Boolean exitContext) [0x00019] in <2943701620b54f86b436d3ffad010412>:0 
  at System.Threading.WaitHandle.WaitOne () [0x00000] in <2943701620b54f86b436d3ffad010412>:0 
  at Duplicati.Library.Main.BackendManager+FileEntryItem.WaitForComplete () [0x00000] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.BackendManager.List () [0x0003b] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Operation.FilelistProcessor.RemoteListAnalysis (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.String protectedfile) [0x0000d] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Operation.FilelistProcessor.VerifyRemoteList (Duplicati.Library.Main.BackendManager backend, Duplicati.Library.Main.Options options, Duplicati.Library.Main.Database.LocalDatabase database, Duplicati.Library.Main.IBackendWriter log, System.String protectedfile) [0x00000] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.PreBackupVerify (Duplicati.Library.Main.BackendManager backend, System.String protectedfile) [0x000fd] in <ae134c5a9abb455eb7f06c134d211773>:0 
  at Duplicati.Library.Main.Operation.BackupHandler.Run (System.String[] sources, Duplicati.Library.Utility.IFilter filter) [0x003c6] in <ae134c5a9abb455eb7f06c134d211773>:0

Hello @zamana, welcome to the forum!

As far as I know there’s no limit on the number of backups, but with 21 of them all running on the same VM (wow!) I’d be a bit worried about scheduling.

I usually see the thread abort message when manually cancelling a job “right now” (not “after next upload”). While I don’t think that’s what’s going on for you, is it possible the VM is restarting (maintenance window?) while the job is running?
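If you want to rule that out, a quick check from inside the container/VM is something like this (note that wtmp can be sparse inside an LXC container, so take the reboot history with a grain of salt):

    # How long has the system been up?
    uptime

    # Any recent (unexpected) reboots?
    last reboot | head -n 5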

Try checking the job’s Remote log to see if anything is being transferred to the destination. Or just look at the destination and see if any Duplicati files are there. If so, then the job is at least running and is somehow getting interrupted (or crashing).
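If you happen to have the B2 command-line tool installed, one quick way to peek at the destination is something like this (the bucket name, folder and credentials are placeholders; older B2 CLI versions spell the commands with underscores instead of dashes):

    # Log in with your B2 account ID and application key (placeholders)
    b2 authorize-account ACCOUNT_ID APPLICATION_KEY

    # List what Duplicati has uploaded to the bucket/folder used by the job
    b2 ls my-backup-bucket duplicati-folder

If the job is actually transferring data, you should see files named duplicati-*.dblock.zip, duplicati-*.dindex.zip and duplicati-*.dlist.zip (with an .aes suffix if the backup is encrypted).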

Hello JonMikeIV!

Thanks for the reply.

I forgot to mention that none of them are scheduled. I’m running them manually, one by one.

But I somewhat “fixed” the problem by:

  1. deleting the job
  2. creating the job again
  3. repairing the database (the command-line equivalent is sketched below, for reference)
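In case anyone wants to run that repair step from the command line instead of the web UI, it is roughly this (the storage URL, credentials and database path below are placeholders; on Linux the CLI is normally invoked through mono):

    # Rebuild/repair the job's local database from the remote data (all values are placeholders)
    mono Duplicati.CommandLine.exe repair \
      "b2://my-bucket/my-folder?auth-username=ACCOUNT_ID&auth-password=APPLICATION_KEY" \
      --dbpath=/root/.config/Duplicati/MYJOB.sqlite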

Now the job is running smoothly.

Thanks.
Regards.

By the way, Duplicati (despite Mono) is very light on hardware resources:

Thanks for letting us know what resolved the issue for you - and for sharing the resources graph! What monitoring tool is that? It looks nice. :slight_smile:

It’s odd that some users report terrible resource usage and others, like you, say it’s very easy on their systems. Someday we’ll hopefully figure out why that is. :thinking:

This is the native monitoring of Proxmox.

Proxmox is a virtualization server (like ESXi/VMware), based on Debian, where you can run all your applications in containers (LXC) or in VMs. Containers are a form of very lightweight VM: they run with almost the performance of a bare-metal application, but isolated from the other components. It’s very easy to manage, once you understand and solve the user/group/permission issues that arise.
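Just to give an idea of how lightweight they are to set up, creating and starting an LXC container from the Proxmox shell looks roughly like this (the VMID, template name, sizes and network settings are only examples):

    # Create an Ubuntu 18.04 container from a downloaded template (all values are examples)
    pct create 110 local:vztmpl/ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz \
      --hostname duplicati \
      --cores 2 --memory 1024 \
      --rootfs local-lvm:8 \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp

    # Start it and open a shell inside
    pct start 110
    pct enter 110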

Regards.

In that resource chart, was Duplicati actively running a backup, or just sitting waiting for the next backup to run? For me, the mono process consumes nearly 100% CPU during a backup.
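If you want to see that for yourself, something like this shows the CPU and memory use of the mono processes while a backup is running:

    # Show CPU/memory usage of the running mono processes (Duplicati runs under mono)
    ps -C mono -o pid,%cpu,%mem,etime,args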

Hello @warwickmm

Yes, Duplicati was running at that time.

Here are some new screenshots that I took just now, while writing this message (Oct 12, 2018 - 18:58 GMT-3):

[I know, I know: my upload speed is ridiculous! :frowning: ]

Here is the Proxmox monitoring for the Duplicati container:

Here is an htop view from inside the Duplicati container:

On this host I have containers for Plex Media Server, Sonarr, Radarr, Jackett, Monitorr, PiHole, Deluge, Duplicati, and another container with an AFP share exposing a virtual disk for my Time Machine backups.

Note: the system blocked me from uploading more than 3 images…

Regards.