Hello @archon810 and welcome to the forum!
Have you tried filing a ticket with Box to ask them whether the limit applies to paged requests to their API?
What seems quite clear is that their FTP server has hit errors at 20,000 items. Duplicati, though, uses their REST API.
> Unable to skip the file on the server, when the there is a file of the same name
> Response: 550 Box: Too many items to display. Directory contains 21294 items; limit for directory listing is 20000 items.

List of FTP server return codes gives the generic meaning of that code:

> 550 Requested action not taken. File unavailable (e.g., file not found, no access).
Max files per folder was an all-APIs question that (in 2016) advised filing a support ticket describing your scaling needs to get advice. There's also a 2018 comment there about the 20,000 limit for FTP (but the 550 response above shows it's an error from the server itself).
How many files in a folder said "there is not an enforced limit", then went on to worry about performance as folders scale up.
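If you'd like to test the paging question directly (before or instead of the ticket), here's a minimal sketch against Box's v2 REST API, assuming the `requests` library, a developer token, and the numeric folder ID (the token and ID below are placeholders). It walks the folder in 1000-item pages and counts what comes back, so you can see whether the listing stops near 20,000:

```python
import requests

TOKEN = "YOUR_DEVELOPER_TOKEN"   # placeholder: a Box developer token
FOLDER_ID = "123456789"          # placeholder: numeric ID of the backup folder

url = f"https://api.box.com/2.0/folders/{FOLDER_ID}/items"
headers = {"Authorization": f"Bearer {TOKEN}"}

count = 0
offset = 0
while True:
    # Box caps 'limit' at 1000 per request; 'offset' advances through the pages.
    resp = requests.get(url, headers=headers,
                        params={"limit": 1000, "offset": offset, "fields": "name"})
    resp.raise_for_status()
    entries = resp.json()["entries"]
    if not entries:
        break
    count += len(entries)
    offset += len(entries)

print(f"Listed {count} items via paged REST requests")
```

If that prints the full 21294, then the 20,000 limit is an FTP-listing problem rather than a REST one.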
Worrying about performance at scale is exactly what I'm going to do here. Duplicati seems (based on forum reports) to run into practical issues scaling up, well before any theoretical limits are reached. Choosing sizes in Duplicati explains some of the tradeoffs.
The default deduplication --blocksize of 100KB is probably too small here. 8TB of source data means roughly 80 million blocks to track, which likely means a large local SQLite database, slow queries, and pain on Repair and Recreate (test those).
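Rough numbers, under the assumption that the block count is roughly source size divided by blocksize (ignoring whatever deduplication saves you):

```python
# Back-of-the-envelope block counts for an 8TB source at different --blocksize values.
TB, MB, KB = 1000**4, 1000**2, 1000

source = 8 * TB
for blocksize in (100 * KB, 1 * MB, 5 * MB):
    blocks = source // blocksize
    print(f"--blocksize={blocksize // KB}KB  ->  ~{blocks / 1e6:.1f} million blocks")

# --blocksize=100KB   ->  ~80.0 million blocks
# --blocksize=1000KB  ->  ~8.0 million blocks
# --blocksize=5000KB  ->  ~1.6 million blocks
```

As far as I know, --blocksize can't be changed once a backup exists, so it's worth settling on before the first run.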
Can you split that 8TB into several smaller backups? That would also help avoid a potentially very slow disaster recovery.
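If splitting works for you, here is a rough sketch of what two separate command-line jobs might look like (the storage URLs, paths, and database locations are placeholders, and the option names are worth double-checking with `duplicati-cli help`):

```
duplicati-cli backup <storage-url-photos> /data/photos --blocksize=1MB --dbpath=/var/lib/duplicati/photos.sqlite
duplicati-cli backup <storage-url-videos> /data/videos --blocksize=1MB --dbpath=/var/lib/duplicati/videos.sqlite
```

Each job then gets its own (much smaller) local database and its own destination folder, so no single Box folder has to hold everything.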
Do you need every version forever, or do older versions become less important over time? A retention policy can thin versions out if that suits you; deleting a version removes its dlist file, and the dblocks it referenced will eventually get empty enough that compact repacks what's left of them into a new dblock (along with leftovers from other dblocks).
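The retention policy option takes a list of "timeframe:interval" pairs. A hedged example (this is the syntax as I recall it; double-check the option documentation before relying on it):

```
--retention-policy="1W:1D,4W:1W,12M:1M"
```

That would keep one version per day for the most recent week, one per week for the most recent month, one per month for the most recent year, and drop anything older.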
The COMPACT command shows some of the available tuning options; some of them get touchy at extreme settings.
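For reference, a sketch of the kind of knobs I mean, with names and defaults as I recall them (`duplicati-cli help compact` should give the authoritative list):

```
duplicati-cli compact <storage-url> --threshold=25 --small-file-max-count=20
```

--threshold is the percentage of wasted space allowed before compacting kicks in: very low means near-constant repacking and re-uploading, very high means dead space piling up on the destination.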