B2 quota exceeded corrupts Duplicati database

I have just started using Duplicati with B2 (free tier, just to test). The initial backup went well. After about the third backup in one day, the 1 GB free download quota was exceeded:

Failed to process file duplicati-20180223T230000Z.dlist.zip.aes => System.Exception: 403 - download_cap_exceeded: Cannot download file, download bandwidth or transaction (Class B) cap exceeded. See the Caps & Alerts page to increase your cap.
   at Duplicati.Library.Main.BackendManager.GetForTesting(String remotename, Int64 size, String hash)

After this, every backup fails with “Unexpected number of remote volumes detected: 0”. In the log these are preceded by lots of SQLite UNIQUE constraint errors.

This first happened yesterday. After a delete and rebuild of the database, backups started working again, but after a few backups the quota is exceeded again and backups start to fail again.

Hi @Kevin and welcome to the forum.

What you should keep in mind is that, at default settings, Duplicati downloads at least one volume of your data after each run to check for errors. This is quite a good thing, as long as you do not make the volumes too big.
If you set the upload volume size to 250 MB, it will download one volume of that size at the end of every run to verify its consistency (which would add up to 1 GB over 4 runs).

You can a) suppress this check, or b) set the upload volume size to a smaller value to reduce the amount of data downloaded on every run.
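For a rough feel of the numbers (just a back-of-the-envelope sketch, nothing measured from Duplicati itself), the daily verification downloads scale directly with the volume size:

# Back-of-the-envelope sketch: how many backup runs fit into B2's free
# 1 GB/day download allowance if every run downloads one remote volume
# for verification. The volume sizes are illustrative.
B2_FREE_DOWNLOAD_CAP_MB = 1024

def runs_per_day_before_cap(volume_size_mb, samples_per_run=1):
    return B2_FREE_DOWNLOAD_CAP_MB // (volume_size_mb * samples_per_run)

print(runs_per_day_before_cap(250))  # 250 MB volumes -> 4 runs, as described above
print(runs_per_day_before_cap(50))   # 50 MB volumes  -> 20 runs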

So much for that. About the error you receive: does it go away once midnight passes and the quota is reset?

Hi @cpo. I agree that the download to check for errors is a good thing. And I understand why that is leading to exceeding the quota in my test scenario.

Once midnight (in California) passes the quota error goes away. But my backup is still broken because the local database is now corrupted. To recover I have to delete and rebuild the database.

Something goes wrong when the quota is exceeded and then in the next backup I get a:
SQLiteException (0x80004005): disk I/O error
Followed by quite a few
System.NullReferenceException: Object reference not set to an instance of an object
And at the end:

2018-02-24 10:20:02Z - Error: Fatal error
System.Exception: Unexpected number of remote volumes detected: 0!
   at Duplicati.Library.Main.Database.LocalDatabase.UpdateRemoteVolume(String name, RemoteVolumeState state, Int64 size, String hash, Boolean suppressCleanup, TimeSpan deleteGraceTime, IDbTransaction transaction)
   at Duplicati.Library.Main.Database.LocalDatabase.UpdateRemoteVolume(String name, RemoteVolumeState state, Int64 size, String hash, Boolean suppressCleanup, IDbTransaction transaction)
   at Duplicati.Library.Main.Operation.BackupHandler.FinalizeRemoteVolumes(BackendManager backend)
   at Duplicati.Library.Main.Operation.BackupHandler.Run(String[] sources, IFilter filter)

2018-02-24 10:20:02Z - Error: Rollback error: unknown error
No transaction is active on this connection
System.Data.SQLite.SQLiteException (0x80004005): unknown error
No transaction is active on this connection
   at System.Data.SQLite.SQLiteTransactionBase.IsValid(Boolean throwError)
   at System.Data.SQLite.SQLiteTransactionBase.Rollback()
   at Duplicati.Library.Main.Operation.BackupHandler.Run(String[] sources, IFilter filter)

2018-02-24 10:20:02Z - Error: Failed disposing index volume
System.InvalidOperationException: Attempted to dispose an index volume that was being written
   at Duplicati.Library.Main.Volumes.IndexVolumeWriter.Dispose()
   at Duplicati.Library.Main.Operation.BackupHandler.Dispose()

I deleted and rebuilt the database this morning and backups ran correctly. After 4 backups I exceeded the quota again, the following backup failed with the same exceptions as above, and the database had to be deleted and rebuilt before backups could succeed again.

Are you by any chance a crashplan refugee and still have it running?
Something sounding similar happened here:

Edit: related post:

If not: are you hitting any class B or C caps on B2?

I would try to replicate this, but I am running quite a big backup job this weekend and do not want to interrupt it. Perhaps one of the other guys here can shed some light?

Yes, I meant to get rid of crashplan but forgot. I’ll try and reproduce it tomorrow without crashplan running.

No problem with class B or C caps so far - plenty of margin there.

Meanwhile I’ll try to hit the daily cap at B2 (my big backup job finished early).

Edit: Ok, got there:

Errors: [
Failed to process file duplicati-20180224T192559Z.dlist.zip => 403 - download_cap_exceeded: Cannot download file, download bandwidth or transaction (Class B) cap exceeded. See the Caps & Alerts page to increase your cap.,

So, I’ve hit the backup button 3 more times after the error above.
But all I get is the same error message about the download cap being exceeded, over and over again.
Nothing more shows up.

Will try again after midnight, though, and see what happens then…


Ok. Through midnight here in Europe :grin:

The quota error is gone; I tried again and the backup continued successfully without any further errors. So we might have to look further.

One thought (but just a wild guess): did you set your bucket retention policy on B2 to “Keep only latest version”? Maybe that interferes a bit?

Yesterday I had a backup fail at 00:00 CET due to exceeding my quota. Backblaze is in California, so I think the quota resets at 00:00 PST (09:00 CET).

Anyway, this morning I disabled the crashplan service and I have not been able to reproduce the problem. The first backup that exceeds the quota gets the download_cap_exceeded exception and fails with a popup in the GUI saying “Got 0 warning(s)” (but I’ve seen this mentioned in the forum, so it’s a known issue). After this, further backups fail in the same way (as expected). Even after restarting the crashplan service this is the case.

So after disabling the crashplan service I have not been able to reproduce the problem I saw previously where the database gets corrupted and backups immediately fail. I will try again tomorrow with the crashplan service running at the point where quota is exceeded and see what happens.

My bucket retention policy is set to Keep only latest version and I believe that is correct as Duplicati manages the versioning.

Good news, we‘re locating the bad guy :slight_smile:

And you’re right about this:

This is the preferred way to go.

Yesterday I had the SQLiteException (0x80004005): disk I/O error followed by backups failing with “Unexpected number of remote volumes detected: 0!” on a backup to hubiC (so no quota problem there) with the crashplan service disabled.

I have 3 backups scheduled, all from AppData: one to B2, one to hubiC and one to pCloud. So one thing comes to mind: they are all trying to back up the directory with the databases. The backups shouldn’t run concurrently, and from the logs it appears that they are not. But I guess they will each try to back up “their own” database.

I have now moved the databases to a different drive.

If the folder containing your Duplicati databases is part of the backup then yes, the database for the active backup will be included in the backup.

Unless you’re using snapshots (such as VSS on Windows) this should cause an “access denied” type error for that specific file, but shouldn’t cause any corruption.

Assuming you can actually see duplicati* files at your hubiC destination, this is probably a transient issue with hubiC reporting what files are in the destination folder and is likely not related to the SQLiteException you got.

Has the 0 volumes error happened again?

I’m on Windows but not using VSS. I have never seen an “access denied” type error for one of the main database files, only for an sqlite-journal file.

The 0 volumes error hasn’t happened since I moved the databases to a different drive. Until now it always started with an SQLite exception and thereafter all runs of that job fail with the 0 volumes error and a constraint violation in the log.

Now I’m seeing 2 other problems:

  1. The source machine is set to sleep so it often misses backups while asleep. When I start the machine and log on again each missed backup job runs twice in succession.
  2. The Duplicati.GUI.TrayIcon process is eating around 17% CPU looping with the following message:
Feb 27, 2018 6:01 PM: Scheduler failed to find next date
System.Exception: Unable to find a valid date, given the start date 1/1/0001 12:00:00 AM, the repetition interval 2h and the allowed days Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday
   at Duplicati.Server.Scheduler.GetNextValidTime(DateTime basetime, DateTime firstdate, String repetition, DayOfWeek[] allowedDays)
   at Duplicati.Server.Scheduler.Runner()
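Just a guess at where the CPU goes (an assumption on my part about how the scheduler works, not anything I’ve verified in the code): if it steps forward from that stored start date one repetition interval at a time, a start date of 0001-01-01 with a 2h interval means millions of steps on every pass. Roughly:

from datetime import datetime, timedelta

# Hypothetical illustration only, not Duplicati's actual scheduler code:
# stepping from the bogus 0001-01-01 start date to "now" in 2-hour increments.
start = datetime.min                  # 0001-01-01 00:00:00
interval = timedelta(hours=2)
steps = (datetime(2018, 2, 27) - start) // interval
print(steps)                          # roughly 8.8 million steps per scheduling pass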

Stopping/starting the tray icon seems to have helped.

I haven’t seen that error before. If you look at step 4 (Schedule) of editing your job, what date is in the “Next time” field?


Can you tell if it’s starting both runs of the job from the beginning, or is it perhaps finishing one run that started before sleep mode and then starting another run? Based on how I understand the job queue, I could see that happening…

Note that if you go to the About menu, select the “System info” button and scroll down to the bottom of the page, you should see a “Server state properties” section. In there should be proposedSchedule (what should happen eventually but isn’t actively in the queue, like the NEXT scheduled run of a job that is currently running) and schedulerQueueIds (what’s currently in the queue to be run).

Server state properties
lastEventId : 1866
lastDataUpdateId : 4
lastNotificationUpdateId : 1
estimatedPauseEnd : 0001-01-01T00:00:00
activeTask :
programState : Running
lastErrorMessage :
connectionState : connected
xsfrerror : false
connectionAttemptTimer : 0
failedConnectionAttempts : 0
lastPgEvent :
updaterState : Waiting
updatedVersion : 2.0.2.19_canary_2018-02-12
updateReady : false
updateDownloadProgress : 0
proposedSchedule : []
schedulerQueueIds : []
pauseTimeRemain : 0

What I suspect is happening is that the job is taking so long to run (in this case due to being paused by sleep mode) that the next scheduled start time passes causing it to “run twice”.

In one of my 3 scheduled jobs the Next time field is 01:00 0001-01-01, so that will be it. But I didn’t set it that way. Looking in the logs, the last action logged for that job was a database repair run yesterday and since then no scheduled runs.

Here is an example with 2 scheduled jobs. Text copied from the emails sent by job 1:

EndTime: 27/02/2018 10:19:35
BeginTime: 27/02/2018 10:15:11
EndTime: 27/02/2018 10:26:50
BeginTime: 27/02/2018 10:23:23

And from job 2:

EndTime: 27/02/2018 10:29:09
BeginTime: 27/02/2018 10:27:00
EndTime: 27/02/2018 10:23:14
BeginTime: 27/02/2018 10:19:46

So I think that after the computer woke up, first a missed job 1 ran 10:15-10:19, then a missed job 2 ran 10:19-10:23, then another job 1 ran 10:23-10:26, and then another job 2 ran 10:27-10:29.

In the log from job 1:

2018-02-27 11:15:11Z - Profiling: Starting - Running Backup
2018-02-27 11:19:45Z - Profiling: Running Backup took 00:04:34.366
2018-02-27 11:23:23Z - Profiling: Starting - Running Backup
2018-02-27 11:26:59Z - Profiling: Running Backup took 00:03:35.671

Which brings up another question: the time format in the log implies UTC, and I would expect the time in the email to be my local time (Europe/Amsterdam). I just received an email from the backup that says EndTime: 27/02/2018 18:27:14, the log says 19:27:23Z, and the email arrived at 19:27. So the log is actually local time and the email is UTC, the opposite of what I’d expect.

This is a known issue and I believe it’s been resolved in more recent canary versions (sorry, I don’t see what version you’re using).


I’ve seen that value in the “Next time” field before, but it’s been a while and I’m not sure what I was doing when I saw it. (I suspect it may be a side effect of importing a scheduled job, but haven’t verified that).

Does putting a real date / time in that field resolve the Unable to find a valid date error / looping? I doubt it will help with the 17% CPU usage, but let me know if it does.


The way the job queue is written, it’s supposed to check and NOT add a job if its ID is already in the queue.

Perhaps the “1/1/0001 12:00:00 AM” time was screwing up the queue code. If this is the cause, then putting a real time in there should resolve the issue - please let us know if it does, so we can try to fix this failure scenario.
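Something along these lines (just my sketch of the intended behaviour, not the actual Duplicati code), using the same (task id, job id) pairs that show up in schedulerQueueIds:

# Sketch of the intended de-duplication (an assumption about the behaviour,
# not the actual Duplicati code). Entries mirror the (task id, job id) pairs
# shown in schedulerQueueIds.
scheduler_queue = []

def enqueue(task_id, job_id):
    if any(queued_job == job_id for _, queued_job in scheduler_queue):
        return False                      # job already queued, skip it
    scheduler_queue.append((task_id, job_id))
    return True

print(enqueue(225, "6"))   # True  -> queued
print(enqueue(226, "6"))   # False -> duplicate suppressed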


Were you able to get any info from the “Server state properties” section in About → “System info”?

So far I haven’t reproduced either issue. After killing the TrayIcon process and restarting it, I haven’t seen any high CPU usage apart from when it’s running backups. The message ‘Unable to find a valid date’ now only occurs 4 times and then stops for about 5 minutes, whereas last night I had hundreds within the same minute.

Also, the backup running twice happened on the first wake of the machine this morning, but not since then. So I don’t have any information on how the “Server state properties” looks when it happens. I will try to look tomorrow morning.

Backups running multiple times occurred again this morning. The machine went to sleep around 22:45 UTC on 28/2. It woke at 02:00 UTC on 1/3 for a Windows scheduled task and went back to sleep. At around 05:45 UTC on 1/3 I woke the machine and logged on. The first missed backup, A, started immediately. After it started, the queue looked like this:

proposedSchedule : [{"Item1":"5","Item2":"2018-03-01T06:10:00Z"},{"Item1":"6","Item2":"2018-03-01T06:20:00Z"}]
schedulerQueueIds : [{"Item1":225,"Item2":"6"}]

Immediately after backup A finished, backup B started and then the queue was:

proposedSchedule : [{"Item1":"5","Item2":"2018-03-01T06:10:00Z"},{"Item1":"6","Item2":"2018-03-01T06:20:00Z"}]
schedulerQueueIds : [{"Item1":262,"Item2":"5"}]

Immediately after backup B finished, backup A started again and then the queue was:

proposedSchedule : [{"Item1":"5","Item2":"2018-03-01T06:10:00Z"},{"Item1":"6","Item2":"2018-03-01T06:20:00Z"}]
schedulerQueueIds : [{"Item1":266,"Item2":"6"}]

Immediately after backup A finished, backup B started again and then the queue was:

proposedSchedule : [{"Item1":"6","Item2":"2018-03-01T06:20:00Z"},{"Item1":"5","Item2":"2018-03-01T08:10:00Z"}]
schedulerQueueIds : [{"Item1":268,"Item2":"5"}]

Backup A is scheduled to run at 10 past the hour, every 2 hours; backup B at 20 past the hour, every 2 hours. When the machine woke up at 05:45 UTC, the next regular scheduled backup A would have been 06:10 UTC and backup B 06:20 UTC (ignoring the re-scheduled backups missed while the machine was asleep).

Firstly, I’m going to say we’ve gotten quite a bit off the original topic title (“B2 quota exceeded…”) - did that ever get resolved? Should we break this chunk into its own thread?

Secondly, thanks for the detailed info! I’m working through understanding this myself here, so please bear with me while I try to restate what you said (to make sure I’ve got it right).

  • Job A (aka ID 5) is scheduled to run at 10 min. after every EVEN hour
  • Job B (aka ID 6) is scheduled to run at 20 min. after every EVEN hour
  • at 04:10 job ID 5 wanted to run but couldn’t because machine was off
  • at 04:20 job ID 6 wanted to run but couldn’t because machine was off
  • at 05:45 machine was turned on and missed 04:10-job-id-5 run started
  • at ??:?? late-started 04:10-job-id-5 job finished and missed 04:20-job-id-6 job started
  • at ??:?? (assumed after 06:10) late-started 04:20-job-id-6 finished and now-missed 06:10-job-id-5 started
  • at ??:?? (assumed after 06:20) late-started 06:10-job-id-5 finished and now-missed 06:20-job-id-6 started
  • at ??:?? (assumed BEFORE 08:10) late-started 06:20-job-id-6 finished and normal schedule resumes
  • bonus: If late-started 06:20-job-id-6 were to finish AFTER 08:10 there would be a 3rd set of “job ran again immediately after finishing”

So I think I understand how it’s happening and it’s not really a bug, it’s just how the queuing process is written.
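A rough replay of that sequence (a sketch of the queuing behaviour as described above, not Duplicati’s actual scheduler code) produces exactly the A, B, A, B pattern you saw:

from datetime import datetime, timedelta

# Rough replay of the wake-up sequence above (illustrative durations).
slots = {"A": datetime(2018, 3, 1, 4, 10), "B": datetime(2018, 3, 1, 4, 20)}
interval = timedelta(hours=2)
run_length = timedelta(minutes=20)       # illustrative backup duration
clock = datetime(2018, 3, 1, 5, 45)      # machine wakes up

while True:
    job = min(slots, key=lambda j: slots[j])    # most overdue pending slot
    if slots[job] > clock:
        break                                   # nothing overdue, normal schedule resumes
    print(f"{clock:%H:%M} run job {job} (owed since {slots[job]:%H:%M})")
    clock += run_length
    slots[job] += interval                      # the next slot this job owes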

That being said, I don’t really have any solution to the issue other than possibly using a --run-script-before-required script to disallow a run if the previous one was too recent. I’d also like to ask @kenkendk if it would make sense to support a “don’t run if last successful run finished less than X min. ago” setting.
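For example, something like this as the required script (a sketch only - the marker-file path and one-hour threshold are made up, and I’m assuming a non-zero exit code from a --run-script-before-required script makes Duplicati skip the run):

#!/usr/bin/env python3
# Sketch of a guard script for --run-script-before-required. Assumptions:
# the marker-file path and one-hour threshold are made up, and a non-zero
# exit code is taken to make Duplicati skip the run.
import os, sys, time

MARKER = os.path.expanduser("~/.duplicati-last-run")   # hypothetical path
MIN_GAP_SECONDS = 60 * 60                               # e.g. one hour

if os.path.exists(MARKER) and time.time() - os.path.getmtime(MARKER) < MIN_GAP_SECONDS:
    sys.exit(1)          # last run was too recent: tell Duplicati not to run

# Record this run (approximates "last run finished" with "last run started")
# and allow the backup to proceed.
with open(MARKER, "w"):
    pass
sys.exit(0)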

No. I stopped running my B2 backup for now, but I think it is a reproducible issue: when the B2 download quota is exceeded, the database gets corrupted and further runs of that job fail until the database is re-created.

Yes, I think that is a good idea. I’m not sure exactly what to do, though.

As I was writing my last reply I realized that the timing was going to cause confusion. I watched it run multiple times again this morning but I will leave those details for a new thread.