Verifying Backend Data

#41

Hey everyone, same issue here.

I made a huge backup set of all my local drives, totaling about 8TB of data in 2 million files (I’m guessing that was a big mistake; would splitting each drive into its own backup set make things take less time in total?). By this point I’ve tried setting up backups in different ways in multiple versions of Duplicati, and run into a few problems each time (VSS breaking, IO bottlenecks, backing up through SMB eventually locking my server up, etc.).

Right now I’m using 2.0.4.4_canary_2018-11-14 on a 7700K with 16GB of RAM running Windows 7 x64, with the database on an SSD and the temp folder on a mechanical disk.

Settings

--tempdir=N:\Duplicati\Temp
--asynchronous-upload-limit=1
--thread-priority=belownormal
--use-background-io-priority=true
--use-block-cache=true
--disable-file-scanner=true
--synchronous-upload=true
--disable-filelist-consistency-checks=true
--log-file=N:\Duplicati\log.txt
--log-file-log-level=Profiling
--backup-name=Full
--dbpath=C:\Users\Daniel\AppData\Roaming\Duplicati\NNHNWSPLGF.sqlite
--encryption-module=aes
--compression-module=zip
--dblock-size=5GB
--snapshot-policy=Required
--default-filters=Windows
--no-backend-verification=false
--disable-module=console-password-input
--exclude=D:\snapraid.content
--exclude=E:\Uplay\
--exclude=E:\Bethesda\
--exclude=E:\Battle.net\
--exclude=E:\Epic Games\
--exclude=E:\Steam\
--exclude=I:\snapraid.content
--exclude=E:\snapraid.content
--exclude=P:\snapraid.parity
--exclude=P:\snapraid.content

Duplicati Profiling info

(after 3 days on “Verifying backend data…”)

Dec 8, 2018 1:03 PM: Cannot open WMI provider \\localhost\root\virtualization. Hyper-V is probably not installed.
Dec 8, 2018 1:03 PM: Cannot find any MS SQL Server instance. MS SQL Server is probably not installed.
Dec 8, 2018 1:03 PM: Using WMI provider \\localhost\root\virtualization
Dec 6, 2018 11:54 PM: Starting - ExecuteReader: SELECT "A"."Hash", "C"."Hash" FROM (SELECT "BlocklistHash"."BlocksetID", "Block"."Hash", * FROM "BlocklistHash","Block" WHERE "BlocklistHash"."Hash" = "Block"."Hash" AND "Block"."VolumeID" = 732) A, "BlocksetEntry" B, "Block" C WHERE "B"."BlocksetID" = "A"."BlocksetID" AND "B"."Index" >= ("A"."Index" * 3200) AND "B"."Index" < (("A"."Index" + 1) * 3200) AND "C"."ID" = "B"."BlockID" ORDER BY "A"."BlocksetID", "B"."Index"
Dec 6, 2018 11:54 PM: ExecuteReader: SELECT DISTINCT "Hash", "Size" FROM "Block" WHERE "VolumeID" = 732 took 0:00:00:00.002
Dec 6, 2018 11:54 PM: Starting - ExecuteReader: SELECT DISTINCT "Hash", "Size" FROM "Block" WHERE "VolumeID" = 732

Backup log

MainOperation: Repair
RecreateDatabaseResults: null
ParsedResult: Warning
Version: 2.0.4.4 (2.0.4.4_canary_2018-11-14)
EndTime: 25/11/2018 05:03:59 (1543129439)
BeginTime: 25/11/2018 04:40:33 (1543128033)
Duration: 00:23:26.4120000
Messages: [
    2018-11-25 04:40:33 -02 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Repair has started,
    2018-11-25 04:41:16 -02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  (),
    2018-11-25 04:41:24 -02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (169 bytes),
    2018-11-25 04:41:24 -02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Temporary: duplicati-20181121T053703Z.dlist.zip.aes,
    2018-11-25 04:41:25 -02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Temporary: duplicati-20181122T224135Z.dlist.zip.aes,
...
]
Warnings: [
    2018-11-25 04:40:33 -02 - [Warning-Duplicati.Library.Main.Controller-UnsupportedOption]: The supplied option --default-filters is not supported and will be ignored
]
Errors: []
BackendStatistics:
    RemoteCalls: 1
    BytesUploaded: 0
    BytesDownloaded: 0
    FilesUploaded: 0
    FilesDownloaded: 0
    FilesDeleted: 0
    FoldersCreated: 0
    RetryAttempts: 0
    UnknownFileSize: 0
    UnknownFileCount: 0
    KnownFileCount: 169
    KnownFileSize: 451490351253
    LastBackupDate: 01/01/0001 00:00:00 (-62135589600)
    BackupListCount: 0
    TotalQuotaSpace: 17433015040055
    FreeQuotaSpace: 6437898762295
    AssignedQuotaSpace: -1
    ReportedQuotaError: False
    ReportedQuotaWarning: False
    ParsedResult: Warning
    Version: 2.0.4.4 (2.0.4.4_canary_2018-11-14)
    Messages: [
        2018-11-25 04:40:33 -02 - [Information-Duplicati.Library.Main.Controller-StartingOperation]: The operation Repair has started,
        2018-11-25 04:41:16 -02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Started:  (),
        2018-11-25 04:41:24 -02 - [Information-Duplicati.Library.Main.BasicResults-BackendEvent]: Backend event: List - Completed:  (169 bytes),
        2018-11-25 04:41:24 -02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Temporary: duplicati-20181121T053703Z.dlist.zip.aes,
        2018-11-25 04:41:25 -02 - [Information-Duplicati.Library.Main.Operation.FilelistProcessor-RemoteUnwantedMissingFile]: removing file listed as Temporary: duplicati-20181122T224135Z.dlist.zip.aes,
...
    ]
    Warnings: [
        2018-11-25 04:40:33 -02 - [Warning-Duplicati.Library.Main.Controller-UnsupportedOption]: The supplied option --default-filters is not supported and will be ignored
    ]
    Errors: []

Full report of the steps I took

Maybe (just maybe) I’m being overly verbose here, but since I don’t know what caused this, might as well, right?

I started the backup for the first time and it was running fine for a day or two. I then had to reboot my computer for an unrelated reason. Before doing so, I clicked on “stop after upload”, but it kept going for a few hours longer than I could wait, so I clicked “stop now” and after it actually stopped I closed Duplicati through the tray icon. After the reboot, when trying to start the backup again, it failed with this error:

Duplicati.Library.Interface.UserInformationException: Unexpected difference in fileset version 1: 21/11/2018 03:37:03 (database id: 2), found 516559 entries, but expected 582184
   at Duplicati.Library.Main.Database.LocalDatabase.VerifyConsistency(Int64 blocksize, Int64 hashsize, Boolean verifyfilelists, IDbTransaction transaction)
   at Duplicati.Library.Main.Operation.Backup.BackupDatabase.<>c__DisplayClass32_0.<VerifyConsistencyAsync>b__0()
   at Duplicati.Library.Main.Operation.Common.SingleRunner.<>c__DisplayClass3_0.<RunOnMain>b__0()
   at Duplicati.Library.Main.Operation.Common.SingleRunner.<DoRunOnMain>d__2`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task task)
   at Duplicati.Library.Main.Operation.BackupHandler.<RunAsync>d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at CoCoL.ChannelExtensions.WaitForTaskOrThrow(Task task)
   at Duplicati.Library.Main.Controller.<>c__DisplayClass13_0.<Backup>b__0(BackupResults result)
   at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, IFilter& filter, Action`1 method)
   at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
   at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)

Then, I clicked on “Repair Database” and after that the backup started again. This time, the “Verifying backend files…” part took about 24 hours, but after that it started backing up again. Not sure if this is relevant to this issue, but it was showing negative files and bytes in the progress bar, probably due to --disable-file-scanner=true.

The problem is, 8TB is a lot of data, and something will probably go wrong in the process of uploading all that. I had a power outage after about 1 week and 2TB uploaded, and now when starting the backup Duplicati got stuck on “Verifying backend files…” for 3 days, using 13% of CPU (1 full core). It was definitely doing something (or at least trying), as I could see activity on the db file in Resource Monitor. After I stopped the backup, I tried opening the database in DB Browser to manually run that query, but it says it’s encrypted, and my backup key doesn’t seem to be the same key for the database.

I’ve since set --no-backend-verification=true and tried again, but it seems stuck on that same query.

Server state properties

profiling-log.zip (3.0 KB)

lastEventId : 22
lastDataUpdateId : 5
lastNotificationUpdateId : 0
estimatedPauseEnd : 0001-01-01T00:00:00
activeTask : {"Item1":3,"Item2":"3"}
programState : Running
lastErrorMessage :
connectionState : connected
xsfrerror : false
connectionAttemptTimer : 0
failedConnectionAttempts : 0
lastPgEvent : {"BackupID":"3","TaskID":3,"BackendAction":"Delete","BackendPath":"duplicati-bed7b0f009ef5455187881debe4895f2d.dblock.zip.aes","BackendFileSize":-1,"BackendFileProgress":0,"BackendSpeed":-1,"BackendIsBlocking":false,"CurrentFilename":null,"CurrentFilesize":0,"CurrentFileoffset":0,"Phase":"Backup_PreBackupVerify","OverallProgress":0,"ProcessedFileCount":0,"ProcessedFileSize":0,"TotalFileCount":1606917,"TotalFileSize":2273880812063,"StillCounting":false}
updaterState : Waiting
updatedVersion :
updateReady : false
updateDownloadProgress : 0
proposedSchedule : [{"Item1":"3","Item2":"2018-12-17T10:00:00Z"}]
schedulerQueueIds : []
pauseTimeRemain : 0

Is this “verification” step something that will be performed every time or something that happens only in specific cases like this?

Also, would setting a bigger block size help?

"Unexpected difference in fileset" errors [2.0.3.9]
#42

Hello @r0zbot, welcome to the forum!

You’ve got a number of things listed there that it sounds like you might be concerned about, but only a few specific questions, so hopefully we cover everything you’re wanting. :slight_smile:

I believe the “Cannot open WMI provider” and “Cannot find any MS SQL Server instance” messages are both informational and can be ignored unless you really do have Hyper-V or SQL Server installed.

The " ```
The supplied option --default-filters is not supported and will be ignored" warning is telling you that parameter has been deprecated. Unfortunately, it doesn’t tell you where to look for the replacement which is now in the"Exclude Filter Group" feature (Release: 2.0.3.5 (canary) 2018-04-13). This may also be why the file scanning too longer after the update (it started including previously ignored system files.)

As for the --no-backend-verification=true parameter, that should do what you want so I’m guessing there’s a different step causing the slowdown that isn’t logging itself correctly.

While a larger block size would likely improve performance (fewer database records), it is one of the few parameters that can’t be changed once a backup has been created. :frowning:

Breaking your backup into smaller chunks would probably give you a better performance boost, but of course that means starting over so it’s kinda painful.

#43

Sorry, I just dumped everything since I didn’t know what was relevant. I’ve already given up on that backup set because it was stuck for days on that same query:

ExecuteReader: SELECT "A"."Hash", "C"."Hash" FROM (SELECT "BlocklistHash"."BlocksetID", "Block"."Hash", * FROM "BlocklistHash","Block" WHERE "BlocklistHash"."Hash" = "Block"."Hash" AND "Block"."VolumeID" = 732) A, "BlocksetEntry" B, "Block" C WHERE "B"."BlocksetID" = "A"."BlocksetID" AND "B"."Index" >= ("A"."Index" * 3200) AND "B"."Index" < (("A"."Index" + 1) * 3200) AND "C"."ID" = "B"."BlockID" ORDER BY "A"."BlocksetID", "B"."Index"

I’ve since increased the block size to 20MB and created a new backup from scratch. What I would like to know is what exactly caused the issue, and how to make Duplicati behave properly in the future when something interrupts it.

#44

Duplicati is usually quite good at recovering from interruptions - my guess is your local database just got so large that more time was spent checking for pre-existing blocks than actually backing them up.
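If you want to dig into why that particular query crawls, here is a rough sketch of how to check (run it against a copy of the job database in DB Browser; the EXPLAIN QUERY PLAN prefix and the extra index are my own guesses, not anything Duplicati ships with). The 3200 in the query appears to be blocksize divided by hash size (102400 / 32 with the defaults), i.e. how many block hashes one blocklist entry covers:

-- Show how SQLite plans the query that the profiling log reports as stuck.
EXPLAIN QUERY PLAN
SELECT "A"."Hash", "C"."Hash"
FROM (SELECT "BlocklistHash"."BlocksetID", "Block"."Hash", *
      FROM "BlocklistHash", "Block"
      WHERE "BlocklistHash"."Hash" = "Block"."Hash" AND "Block"."VolumeID" = 732) A,
     "BlocksetEntry" B, "Block" C
WHERE "B"."BlocksetID" = "A"."BlocksetID"
  AND "B"."Index" >= ("A"."Index" * 3200)
  AND "B"."Index" < (("A"."Index" + 1) * 3200)
  AND "C"."ID" = "B"."BlockID"
ORDER BY "A"."BlocksetID", "B"."Index";

-- If the plan reports a full scan of "Block" for every "BlocklistHash" row,
-- an index like this (my guess at a workaround, try it on a copy first)
-- might speed up the hash lookup:
CREATE INDEX IF NOT EXISTS "BlockHashVolumeID" ON "Block" ("Hash", "VolumeID");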

A 20MB block size will probably make that issue go away, as the database should have far fewer records (roughly 200x fewer, assuming you started from the 100KiB default). Note that deduplication becomes less effective as block size increases, so you may see more destination storage usage than before as a result.
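To put rough numbers on that (my own arithmetic, assuming the backup started from the 100KiB default --blocksize; the SELECT runs in any SQLite shell):

-- Back-of-envelope "Block" row counts for 8TB of source data.
SELECT (8.0 * 1024 * 1024 * 1024 * 1024) / (100 * 1024)       AS blocks_at_100KiB,  -- ~86 million rows
       (8.0 * 1024 * 1024 * 1024 * 1024) / (20 * 1024 * 1024) AS blocks_at_20MiB;   -- ~420 thousand rows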

#45

Hi, I am seeing similar issues (GUI stuck at Verifying Backend Data, system status at “Phase”:“Backup_PreBackupVerify”, unresponsive stop button) after going to 2.0.4.5. I have to add that this does not happen if you do any backups immediately after the upgrade; it happens only after the machine is restarted in any manner (be it a power outage or a normal shutdown). I was able to successfully run a Repair operation on the backup’s database.

I got these from the logs:

> * Dec 11, 2018 9:29 AM: Starting - ExecuteReader: SELECT "A"."Hash", "C"."Hash" FROM (SELECT "BlocklistHash"."BlocksetID", "Block"."Hash", * FROM "BlocklistHash","Block" WHERE "BlocklistHash"."Hash" = "Block"."Hash" AND "Block"."VolumeID" = 97426) A, "BlocksetEntry" B, "Block" C WHERE "B"."BlocksetID" = "A"."BlocksetID" AND "B"."Index" >= ("A"."Index" * 3200) AND "B"."Index" < (("A"."Index" + 1) * 3200) AND "C"."ID" = "B"."BlockID" ORDER BY "A"."BlocksetID", "B"."Index"
> 
> * Dec 11, 2018 9:29 AM: ExecuteReader: SELECT DISTINCT "Hash", "Size" FROM "Block" WHERE "VolumeID" = 97426 took 0:00:00:00.000
> 
> * Dec 11, 2018 9:29 AM: Starting - ExecuteReader: SELECT DISTINCT "Hash", "Size" FROM "Block" WHERE "VolumeID" = 97426
> 
> * Dec 11, 2018 9:29 AM: ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (40, "duplicati-icf794f6b7960408882c9c49b7f365651.dindex.zip.aes", "Index", "Temporary", -1, 0, 0); SELECT last_insert_rowid(); took 0:00:00:00.000
> 
> * Dec 11, 2018 9:29 AM: Starting - ExecuteScalarInt64: INSERT INTO "Remotevolume" ("OperationID", "Name", "Type", "State", "Size", "VerificationCount", "DeleteGraceTime") VALUES (40, "duplicati-icf794f6b7960408882c9c49b7f365651.dindex.zip.aes", "Index", "Temporary", -1, 0, 0); SELECT last_insert_rowid();
> 
> * Dec 11, 2018 9:29 AM: Re-creating missing index file for duplicati-b6026136686564f439025237961c99a48.dblock.zip.aes
> 
> * Dec 11, 2018 9:29 AM: ExecuteReader: SELECT "Name" FROM "RemoteVolume" WHERE "Type" = "Blocks" AND NOT "ID" IN (SELECT "BlockVolumeID" FROM "IndexBlockLink") AND "State" IN ("Uploaded","Verified") took 0:00:00:00.036
> 
> * Dec 11, 2018 9:29 AM: Starting - ExecuteReader: SELECT "Name" FROM "RemoteVolume" WHERE "Type" = "Blocks" AND NOT "ID" IN (SELECT "BlockVolumeID" FROM "IndexBlockLink") AND "State" IN ("Uploaded","Verified")
> 
> * Dec 11, 2018 9:29 AM: ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC took 0:00:00:00.000
> 
> * Dec 11, 2018 9:29 AM: Starting - ExecuteReader: SELECT "ID", "Timestamp" FROM "Fileset" ORDER BY "Timestamp" DESC
> 
> * Dec 11, 2018 9:29 AM: PreBackupVerify took 0:00:14:42.154
> 
> * Dec 11, 2018 9:29 AM: ExecuteReader: SELECT DISTINCT "Name", "State" FROM "Remotevolume" WHERE "Name" IN (SELECT "Name" FROM "Remotevolume" WHERE "State" IN ("Deleted", "Deleting")) AND NOT "State" IN ("Deleted", "Deleting") took 0:00:00:00.013
> 
> * Dec 11, 2018 9:29 AM: Starting - ExecuteReader: SELECT DISTINCT "Name", "State" FROM "Remotevolume" WHERE "Name" IN (SELECT "Name" FROM "Remotevolume" WHERE "State" IN ("Deleted", "Deleting")) AND NOT "State" IN ("Deleted", "Deleting")
> 
> * Dec 11, 2018 9:29 AM: RemoteOperationList took 0:00:14:38.754
> 
> * Dec 11, 2018 9:29 AM: Backend event: List - Completed: (95.15 KB)
#46

I tried again about 24 hours ago, but this time I left it alone (went to do other things + sleep). Right now it is back to normal (as in, it is backing up the files), but where can I get the logs to see how long the pre-backup verification took?

PS: This seems to me to be more of a UX/UI issue than a database issue

#48

This problem is still around and it’s definitely not a UI issue. Mine just took 2 days to verify data before resuming normal backups.

#49

The cause should be fixed in 2.0.4.17_canary_2019-04-11. Backups will still need to recreate and upload any index files that got missed, but they should no longer get missed.

#50

If the 2.0.4.17 fix refers to “Fixed an issue where index files were not generated, thanks @seantempleton”, then it fixes a bug introduced in 2.0.4.16 along with the new parallel upload code, so that was a rather temporary issue and the fix probably wouldn’t help any older releases. For anyone who found that the issue here got worse in 2.0.4.16, the code might have been rebuilding the missing index files while still displaying the PreBackupVerify phase, which is one of the two phases that show “Verifying backend data”. It helps when reports describe error timing.

Rebuilding an index file appears to get logged at Information level, as seen in the text from the bug report above, or in the earlier post in this topic, which also very clearly says PreBackupVerify, so that reduces the guessing. This “Re-creating missing index file” step isn’t new, but not all of the reports here include the info needed to identify it. Not every little action gets its own phase, so guesswork may be needed on what code caused an issue…
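For reference, the profiling log quoted a few posts up shows the query that appears to drive this step, listing uploaded dblock volumes with no dindex linked to them:

SELECT "Name" FROM "RemoteVolume"
WHERE "Type" = "Blocks"
  AND NOT "ID" IN (SELECT "BlockVolumeID" FROM "IndexBlockLink")
  AND "State" IN ("Uploaded","Verified")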

Does that mean that it’s an intermittent slowdown? I notice most reports don’t say much about frequency.

Hangs while trying to "Re-creating missing index file"
#51

Not sure. It’s currently on “Deleting unwanted files” and has been for the last 24 hours or so. But at least the log is constantly updated with new files it fetches from the server and analyzes for whether to delete them or not.

I’ll update to 2.0.4.17 and will update this post on whether it goes into another 2-day cycle of verifying backend data.

#53

I am new to Duplicati and was excited to find a decent backup solution after CrashPlan. But soon after completing the first round of backups, I saw that it got stuck on Verifying backend data… and took a long time; for some backup sets it took over 1 day to verify a couple of GB of data.

After making changes (or adopting new parameters, i.e. changing “block-hash-algorithm” or “blocksize”) you can’t use the existing backup data, so I initiated a fresh backup, which put me back to square one. Searching on Google didn’t help, but I came to know that many other users are having the same issue. After careful monitoring and reviewing the logs, my findings regarding Duplicati getting stuck on Verifying backend data… are:

  1. On the destination, Duplicati creates 3 types of files:
  • dlist.zip.aes - list of the source files and folders
  • dblock.zip.aes - data blocks
  • dindex.zip.aes - indices for quick list and restore

  2. During the initial backup, Duplicati creates the dlist.zip.aes and dblock.zip.aes files but skips creating the dindex.zip.aes files.

  3. On the 2nd backup, just after the initial one, Duplicati creates the indices from the dblock.zip.aes files: it copies each dblock.zip.aes file to the source machine, loads it, generates an index file, and uploads it to the destination as a dindex.zip.aes file. It is a time-consuming process (not sure why), and it creates one index file for each dblock file.

  4. To verify that Duplicati is creating indices and not hung otherwise, go to ‘About’ => ‘Show Log’ => ‘Live’ => ‘ExplicitOnly’.
    A message like the following should appear; look for "VolumeID" = #, which tells you which dblock file it is currently working on to create an index:

Apr 12, 2019 7:41 PM: Starting - ExecuteReader: SELECT "A"."Hash", "C"."Hash" FROM (SELECT "BlocklistHash"."BlocksetID", "Block"."Hash", * FROM "BlocklistHash","Block" WHERE "BlocklistHash"."Hash" = "Block"."Hash" AND "Block"."VolumeID" = 277) A, "BlocksetEntry" B, "Block" C WHERE "B"."BlocksetID" = "A"."BlocksetID" AND "B"."Index" >= ("A"."Index" * 16384) AND "B"."Index" < (("A"."Index" + 1) * 16384) AND "C"."ID" = "B"."BlockID" ORDER BY "A"."BlocksetID", "B"."Index"

  5. If you need to know how many more indices still need to be created, check the destination and count the dblock.zip.aes files; in Duplicati’s messages each dblock.zip.aes file is considered a Volume. Also, as it proceeds, new dindex.zip.aes files should appear at the destination. (Alternatively, you can query the local database directly, as sketched after this list.)

  6. Once all indices are created, subsequent backups should be quick, subject to the amount of data changed between backups.

  7. In my understanding, the current Duplicati message is confusing and should be more meaningful, something like “Creating Index # of ##…”; a progress bar with the current process would be even more helpful.
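If you’d rather check the database than count files at the destination, here is a sketch based on the query Duplicati itself logs during this phase (run it against a read-only copy of the job database; 0 remaining means index creation is finished):

-- Count uploaded dblock volumes that still have no dindex file linked to them.
SELECT COUNT(*) AS missing_indexes
FROM "Remotevolume"
WHERE "Type" = 'Blocks'
  AND "ID" NOT IN (SELECT "BlockVolumeID" FROM "IndexBlockLink")
  AND "State" IN ('Uploaded', 'Verified');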

Thanks

#54

Hello @RGupta and welcome to the forum!

Are you running 2.0.4.16 canary? That’s the one that inadvertently sometimes missed dindex uploads, thus forcing recreation. It’s not a normal thing. Canary releases are bleeding edge, and sometimes have bugs… Fortunately, those who run it can report them and fixes can be made. 2.0.4.17 should upload all the dindex. While I’m not familiar enough with the details of recovery, your analysis is very nice and seems reasonable.

#55

I’m struggling to figure out how I can see how many more files need to be recreated by the two systems I have that have been verifying for over a week. I tried what @RGupta wrote, but the numbers made no sense or I misunderstood the directions. For example, currently one backup has logged:

16 Apr 2019 08:15: Starting - ExecuteReader: SELECT "A"."Hash", "C"."Hash" FROM (SELECT "BlocklistHash"."BlocksetID", "Block"."Hash", * FROM "BlocklistHash","Block" WHERE "BlocklistHash"."Hash" = "Block"."Hash" AND "Block"."VolumeID" = 89642) A, "BlocksetEntry" B, "Block" C WHERE "B"."BlocksetID" = "A"."BlocksetID" AND "B"."Index" >= ("A"."Index" * 3200) AND "B"."Index" < (("A"."Index" + 1) * 3200) AND "C"."ID" = "B"."BlockID" ORDER BY "A"."BlocksetID", "B"."Index"

There are 23,698 dblock.zip.aes files in the destination.

Can anyone help?

#56

Yes, I was on 2.0.4.16. The current release addressed the index issue. Thanks

#58

No changes for me with 2.0.4.17. After upgrading, it has now been “verifying backend data” for over 12 hours.

This only happens when a backup is interrupted, and backups seem to complete in a normal amount of time when the verification process finally finishes, so I guess I can live with this.

#59

I think that means you should have that many dindex files when you get them all back together. Hopefully quite a few of those are old ones, so they should not have been affected by 2.0.4.16 not uploading some of them…

#61

A couple of things I need to clarify here:

  1. VolumeID is the index number (it will ONLY increase), mapped to the actual BlockVolumeID.
  2. BlockVolumes are added or removed as data changes over time, but the VolumeID will only increase.
  3. The above observations are from a fresh backup where no additional blocks were created or removed, so “theoretically” the VolumeIDs are sequentially mapped to BlockVolumes and are in order.

If you do not have a fresh backup, you can download ‘DB Browser for SQLite’ and check the database directly (a simpler variant is sketched after these steps):

  1. Open "READ ONLY… " sqlite database (to find local database file, expend job list => Database (in Advance) => Local database path )
    If database is locked, on Windows machine use ‘ Ctrl+C ’ and ‘ Ctrl+V ’ in explorer and use copy of database.
  2. Go to ‘ Execute SQL ’ tab and paste following query (Replace ## with the current VolumeID)
    select (select max(rowid)from IndexBlockLink) as "Total Volumes", rowid as "Current Volume" from IndexBlockLink where IndexVolumeID = ## order by IndexVolumeID asc
  3. Click on execute button ‘Blue Triangle’.
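As a quick alternative that does not need the current VolumeID (again, run against a read-only copy; table and column names are taken from the queries Duplicati logs):

-- The most recently created dindex volumes and the dblock volumes they cover.
SELECT "IndexVolumeID", "BlockVolumeID"
FROM "IndexBlockLink"
ORDER BY "IndexVolumeID" DESC
LIMIT 10;
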
#62

Ok, I have file counts for the local backup destinations (SMB) for the problem machines telling me:

Indexes: 25,298 Blocks: 23,698
Indexes: 5,717 Blocks: 5,717
Indexes: 12,417 Blocks: 13,651

One of the servers shows as processing a Wasabi-destined backup, so I can’t get the counts as easily, and that’s assuming the GUI is telling me the right thing and it’s indeed processing this backup set, as the main log doesn’t show me which job it is.

#63

Very good analysis! I’m currently experiencing this myself, where a backup made on 2.0.4.16 did not upload all the dindex.zip.aes files.

I agree it could be clearer, but as this is not supposed to happen, maybe it is not worth worrying about.

#64

I’m not sure if having excess index files is normal, but I’ve seen times where two index files index the same dblock. Not sure how it happens or if it causes a problem, but it sure throws off the how-many-left theory…
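
If anyone wants to check for that, here is a sketch against a read-only copy of the job database (schema names taken from the queries logged earlier in this thread):

-- Find dblock volumes referenced by more than one dindex file.
SELECT "BlockVolumeID", COUNT(*) AS index_files
FROM "IndexBlockLink"
GROUP BY "BlockVolumeID"
HAVING COUNT(*) > 1;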