Backups to Backblaze B2 Hanging on "Verifying Backend Data"

I have created a backup job and specified Backblaze B2 as the destination. I scheduled the backup, then decided to run it manually to test for proper functionality (I am a CompTIA A+/Network+ holder, so I know the importance of testing backups). However, the process seems to be stuck on the “Verifying Backend Data” stage, and it has been over twenty minutes since I started the manual backup. Any advice would be helpful.

System Info from the “About” section in Duplicati:

backendgroups : {"std":{"ftp":null,"ssh":null,"webdav":null,"openstack":"OpenStack Object Storage / Swift","s3":"S3 Compatible","aftp":"FTP (Alternative)"},"local":{"file":null},"prop":{"s3":null,"azure":null,"googledrive":null,"onedrive":null,"cloudfiles":null,"gcs":null,"openstack":null,"hubic":null,"amzcd":null,"b2":null,"mega":null,"box":null,"od4b":null,"mssp":null,"dropbox":null,"jottacloud":null}}
GroupTypes : ["Local storage","Standard protocols","Proprietary","Others"]
APIVersion : 1
PasswordPlaceholder : **********
ServerVersion : 2.0.2.1
ServerVersionName : - 2.0.2.1_beta_2017-08-01
ServerVersionType : Beta
BaseVersionName : 2.0.2.1_beta_2017-08-01
DefaultUpdateChannel : Beta
DefaultUsageReportLevel : Information
ServerTime : 2018-02-03T18:30:40.823084-07:00
OSType : Linux
DirectorySeparator : /
PathSeparator : :
CaseSensitiveFilesystem : true
MonoVersion : 4.2.1
MachineName : TheLandOfMrumph
NewLine : 
CLRVersion : 4.0.30319.17020
CLROSInfo : {"Platform":"Unix","ServicePack":"","Version":"4.4.0.112","VersionString":"Unix 4.4.0.112"}
ServerModules : []
UsingAlternateUpdateURLs : false
LogLevels : ["Profiling","Information","Warning","Error"]
SuppressDonationMessages : true
SpecialFolders : [{"ID":"%MY_DOCUMENTS%","Path":"/root"},{"ID":"%HOME%","Path":"/root"}]
BrowserLocale : {"Code":"en-US","EnglishName":"English (United States)","DisplayName":"English (United States)"}
SupportedLocales : [{"Code":"de","EnglishName":"German","DisplayName":"Deutsch"},{"Code":"en","EnglishName":"English","DisplayName":"English"},{"Code":"es","EnglishName":"Spanish","DisplayName":"español"},{"Code":"fi","EnglishName":"Finnish","DisplayName":"suomi"},{"Code":"fr","EnglishName":"French","DisplayName":"français"},{"Code":"nl-NL","EnglishName":"Dutch (Netherlands)","DisplayName":"Nederlands (Nederland)"},{"Code":"pl","EnglishName":"Polish","DisplayName":"polski"},{"Code":"ru","EnglishName":"Russian","DisplayName":"русский"},{"Code":"zh-CN","EnglishName":"Chinese (Simplified)","DisplayName":"中文 (中国)"}]
BrowserLocaleSupported : true
Backend modules: aftp, amzcd, azure, b2, box, cloudfiles, dropbox, ftp, file, googledrive, gcs, hubic, jottacloud, mega, onedrive, openstack, s3, ssh, od4b, mssp, tahoe, webdav

Compression modules: zip, 7z

Encryption modules: aes, gpg

Hi @TylerK93, welcome to the forum!

When you manually ran the backup, was it the FIRST time the backup had been run?

If it’s still running, you could try looking at the main “Show log” page, then selecting “Live” and “Profiling” to see exactly what Duplicati is doing.
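
If the web UI itself gets sluggish, you can also watch the server’s progress state over plain HTTP. Here’s a minimal sketch, assuming the default listen port 8200, no UI password, and the /api/v1/serverstate path, which is where the About page’s “Server State Properties” come from in the builds I’ve looked at:

~~~python
# Poll Duplicati's server state and print the current phase / backend action.
# Assumptions: default port 8200, no UI password, /api/v1/serverstate path.
import json
import time
import urllib.request

URL = "http://localhost:8200/api/v1/serverstate"

while True:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        state = json.load(resp)
    pg = state.get("lastPgEvent") or {}
    # If Phase/BackendAction never move past PreBackupVerify/List,
    # the remote listing call itself is what's stuck.
    print(state.get("programState"), pg.get("Phase"), pg.get("BackendAction"))
    time.sleep(30)  # check every 30 seconds whether the phase ever changes
~~~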

(By the way, I edited your post by adding “~~~” before and after your “About” data to make it more readable.)

Yes, this is the first time the backup has been run. It still appears to be hung, nearly three hours later. Here is the current info from my profiling log:

Feb 3, 2018 7:31 PM: UsageReporter failed
Feb 3, 2018 7:31 PM: UsageReporter failed
Feb 3, 2018 7:31 PM: UsageReporter failed
Feb 3, 2018 7:26 PM: Backend event: List - Started: ()
Feb 3, 2018 7:26 PM: Starting - RemoteOperationList
Feb 3, 2018 7:26 PM: Starting - PreBackupVerify
Feb 3, 2018 7:26 PM: ExecuteReader: SELECT DISTINCT "Fileset"."ID", "Fileset"."Timestamp" FROM "Fileset", "RemoteVolume" WHERE "RemoteVolume"."ID" = "Fileset"."VolumeID" AND "Fileset"."ID" IN (SELECT "FilesetID" FROM "FilesetEntry") AND ("RemoteVolume"."State" = "Uploading" OR "RemoteVolume"."State" = "Temporary") took 00:00:00.000
Feb 3, 2018 7:26 PM: Starting - ExecuteReader: SELECT DISTINCT "Fileset"."ID", "Fileset"."Timestamp" FROM "Fileset", "RemoteVolume" WHERE "RemoteVolume"."ID" = "Fileset"."VolumeID" AND "Fileset"."ID" IN (SELECT "FilesetID" FROM "FilesetEntry") AND ("RemoteVolume"."State" = "Uploading" OR "RemoteVolume"."State" = "Temporary")
Feb 3, 2018 7:26 PM: ExecuteScalarInt64: SELECT COUNT(*) FROM "File" WHERE "BlocksetID" != ? AND "BlocksetID" != ? AND NOT "BlocksetID" IN (SELECT "BlocksetID" FROM "BlocksetEntry") took 00:00:00.000
Feb 3, 2018 7:26 PM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM "File" WHERE "BlocksetID" != ? AND "BlocksetID" != ? AND NOT "BlocksetID" IN (SELECT "BlocksetID" FROM "BlocksetEntry")
Feb 3, 2018 7:26 PM: ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual") took 00:00:00.000
Feb 3, 2018 7:26 PM: Starting - ExecuteScalarInt64: SELECT COUNT(*) FROM (SELECT * FROM (SELECT "N"."BlocksetID", (("N"."BlockCount" + 3200 - 1) / 3200) AS "BlocklistHashCountExpected", CASE WHEN "G"."BlocklistHashCount" IS NULL THEN 0 ELSE "G"."BlocklistHashCount" END AS "BlocklistHashCountActual" FROM (SELECT "BlocksetID", COUNT(*) AS "BlockCount" FROM "BlocksetEntry" GROUP BY "BlocksetID") "N" LEFT OUTER JOIN (SELECT "BlocksetID", COUNT(*) AS "BlocklistHashCount" FROM "BlocklistHash" GROUP BY "BlocksetID") "G" ON "N"."BlocksetID" = "G"."BlocksetID" WHERE "N"."BlockCount" > 1) WHERE "BlocklistHashCountExpected" != "BlocklistHashCountActual")
Feb 3, 2018 7:26 PM: ExecuteScalarInt64: SELECT Count(*) FROM (SELECT DISTINCT "BlocksetID", "Index" FROM "BlocklistHash") took 00:00:00.000
Feb 3, 2018 7:26 PM: Starting - ExecuteScalarInt64: SELECT Count(*) FROM (SELECT DISTINCT "BlocksetID", "Index" FROM "BlocklistHash")
Feb 3, 2018 7:26 PM: ExecuteScalarInt64: SELECT Count(*) FROM "BlocklistHash" took 00:00:00.000
Feb 3, 2018 7:26 PM: Starting - ExecuteScalarInt64: SELECT Count(*) FROM "BlocklistHash"
Feb 3, 2018 7:26 PM: ExecuteReader: SELECT "CalcLen", "Length", "A"."BlocksetID", "File"."Path" FROM ( SELECT "A"."ID" AS "BlocksetID", IFNULL("B"."CalcLen", 0) AS "CalcLen", "A"."Length" FROM "Blockset" A LEFT OUTER JOIN ( SELECT "BlocksetEntry"."BlocksetID", SUM("Block"."Size") AS "CalcLen" FROM "BlocksetEntry" LEFT OUTER JOIN "Block" ON "Block"."ID" = "BlocksetEntry"."BlockID" GROUP BY "BlocksetEntry"."BlocksetID" ) B ON "A"."ID" = "B"."BlocksetID" ) A, "File" WHERE "A"."BlocksetID" = "File"."BlocksetID" AND "A"."CalcLen" != "A"."Length" took 00:00:00.000
Feb 3, 2018 7:26 PM: Starting - ExecuteReader: SELECT "CalcLen", "Length", "A"."BlocksetID", "File"."Path" FROM ( SELECT "A"."ID" AS "BlocksetID", IFNULL("B"."CalcLen", 0) AS "CalcLen", "A"."Length" FROM "Blockset" A LEFT OUTER JOIN ( SELECT "BlocksetEntry"."BlocksetID", SUM("Block"."Size") AS "CalcLen" FROM "BlocksetEntry" LEFT OUTER JOIN "Block" ON "Block"."ID" = "BlocksetEntry"."BlockID" GROUP BY "BlocksetEntry"."BlocksetID" ) B ON "A"."ID" = "B"."BlocksetID" ) A, "File" WHERE "A"."BlocksetID" = "File"."BlocksetID" AND "A"."CalcLen" != "A"."Length"
Feb 3, 2018 7:26 PM: ExecuteReader: SELECT "Key", "Value" FROM "Configuration" took 00:00:00.000
Feb 3, 2018 7:26 PM: Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
Feb 3, 2018 7:26 PM: ExecuteReader: SELECT "Key", "Value" FROM "Configuration" took 00:00:00.000
Feb 3, 2018 7:26 PM: Starting - ExecuteReader: SELECT "Key", "Value" FROM "Configuration"
Feb 3, 2018 7:26 PM: ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES (?, ?) took 00:00:00.000
Feb 3, 2018 7:26 PM: Starting - ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES (?, ?)
Feb 3, 2018 7:26 PM: ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES (?, ?) took 00:00:00.000
Feb 3, 2018 7:26 PM: Starting - ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES (?, ?)
Feb 3, 2018 7:26 PM: ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES (?, ?) took 00:00:00.000
Feb 3, 2018 7:26 PM: Starting - ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES (?, ?)
Feb 3, 2018 7:26 PM: ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES (?, ?) took 00:00:00.000
Feb 3, 2018 7:26 PM: Starting - ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES (?, ?)
Feb 3, 2018 7:26 PM: ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES (?, ?) took 00:00:00.000
Feb 3, 2018 7:26 PM: Starting - ExecuteNonQuery: INSERT INTO "Configuration" ("Key", "Value") VALUES (?, ?)
Feb 3, 2018 7:26 PM: ExecuteNonQuery: DELETE FROM "Configuration" took 00:00:00.000

And here is my “Server State Properties” from the About page:

lastEventId : 86
lastDataUpdateId : 6
lastNotificationUpdateId : 0
estimatedPauseEnd : 0001-01-01T00:00:00
activeTask : {"Item1":2,"Item2":"1"}
programState : Running
lastErrorMessage : 
connectionState : connected
xsfrerror : false
connectionAttemptTimer : 0
failedConnectionAttempts : 0
lastPgEvent : {"BackupID":"1","TaskID":2,"BackendAction":"List","BackendPath":null,"BackendFileSize":-1,"BackendFileProgress":0,"BackendSpeed":-1,"CurrentFilename":null,"CurrentFilesize":0,"CurrentFileoffset":0,"Phase":"Backup_PreBackupVerify","OverallProgress":0,"ProcessedFileCount":0,"ProcessedFileSize":0,"TotalFileCount":17208,"TotalFileSize":1464635669,"StillCounting":false}
updaterState : Waiting
updatedVersion : 
updateReady : false
updateDownloadProgress : 0
proposedSchedule : [{"Item1":"1","Item2":"2018-02-05T04:00:00Z"}]
schedulerQueueIds : [{"Item1":3,"Item2":"1"}]
pauseTimeRemain : 0

I don’t know where else I can look for any logs to troubleshoot this issue. Thanks in advance for your assistance.

Thanks for the details. I don’t know if it’s the cause of your problem, but I know other users have had problems with the UsageReporter in the past - often on Linux-based systems. If you click on the actual “UsageReporter failed” message it should expand and show details about the error.

If you plan to abort and try again, then I’d suggest going to the main Settings page and setting “Usage statistics” to None / disabled to turn off that functionality - I’m not sure it will fix your issue, but it couldn’t hurt.
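
Also, if you want to rule out the B2 side entirely, you can exercise the same “list” operation Duplicati is stuck on directly against the B2 API. Here’s a minimal sketch using the native B2 v1 endpoints - the account ID, application key, and bucket ID are placeholders you’d fill in from your B2 console. If this also hangs or times out, the problem is between your machine and B2 rather than in Duplicati:

~~~python
# Exercise the same "list" call Duplicati's B2 backend performs.
# Requires the third-party "requests" package (pip install requests).
import requests

ACCOUNT_ID = "your-account-id"     # placeholder - from your B2 console
APPLICATION_KEY = "your-app-key"   # placeholder
BUCKET_ID = "your-bucket-id"       # placeholder

# Step 1: authorize (HTTP basic auth with account ID and application key).
auth = requests.get(
    "https://api.backblazeb2.com/b2api/v1/b2_authorize_account",
    auth=(ACCOUNT_ID, APPLICATION_KEY),
    timeout=30,  # fail fast instead of hanging indefinitely
)
auth.raise_for_status()
session = auth.json()

# Step 2: list file names in the bucket - the operation the
# "Verifying Backend Data" phase is waiting on.
listing = requests.post(
    session["apiUrl"] + "/b2api/v1/b2_list_file_names",
    headers={"Authorization": session["authorizationToken"]},
    json={"bucketId": BUCKET_ID, "maxFileCount": 100},
    timeout=30,
)
listing.raise_for_status()
for f in listing.json()["files"]:
    print(f["fileName"])
~~~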

Really? Disabling usage reporting “fixed” it for you?

Odd - looking at the code I wouldn’t think it would have made a difference, but as long as it’s working for you…yay! :smiley:

My logs are now showing “usage reporter failed” constantly - I don’t know how long that’s been going on, but I think it’s new. Is it a new issue or something?

It’s not new, but it generally shouldn’t cause issues with functionality. It’s also not especially common (I don’t recall seeing it other than during failure tests) - then again, most people won’t be looking at their logs.

My 100% uneducated guess is that the usage reporting server might be a bit under-powered for the amount of load it’s getting.

I think additional error reporting code may be necessary to track down the cause of this, though to date I’ve mostly seen it with users having DNS or proxy issues.
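
If anyone wants a quick sanity check on that front, something like this will tell you whether the reporting endpoint even resolves and connects from the affected machine. Note that the hostname below is my assumption for illustration - verify the actual endpoint your Duplicati build uses before reading much into the result:

~~~python
# Quick DNS/connectivity sanity check for the usage-reporter endpoint.
# NOTE: "usage-reporter.duplicati.com" is an assumed hostname for illustration;
# check the endpoint your Duplicati version actually uses.
import socket

HOST = "usage-reporter.duplicati.com"  # assumed endpoint, see note above
PORT = 443

try:
    # Resolve the name first; a failure here points at DNS.
    addrs = socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP)
    print("Resolved:", sorted({a[4][0] for a in addrs}))
    # Then try a plain TCP connect; a failure here points at a firewall or proxy.
    with socket.create_connection((HOST, PORT), timeout=10):
        print("TCP connect OK")
except OSError as e:
    print("Failed:", e)
~~~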
