Server-side implementation (for SSH backends) - for CLI delete & compact - when using low-bandwidth links

OK, I think I understand the process now. But how does it know which blocks are there? If you have 1 million blocks, listing them with a page size of at most 1000 still takes 1000 HTTP requests. Even at 1 second per request, that is more than 15 minutes. And since it supports multiple writers, it pretty much needs to do this on every backup?
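A quick back-of-envelope check of those numbers (the page size and per-request latency are just the assumptions above, not measured values):

```python
# Rough cost of enumerating all chunks via a paginated listing API.
blocks = 1_000_000
page_size = 1_000
seconds_per_request = 1.0

requests = blocks // page_size                      # 1000 list calls
minutes = requests * seconds_per_request / 60
print(f"{requests} requests ~ {minutes:.1f} minutes")  # 1000 requests ~ 16.7 minutes
```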

I haven’t checked the code, but I’d name each file after its hash - then you only need to check whether it already exists at the destination to decide whether to upload it, and you don’t need the entire list ahead of time.

EDIT:
Here is a quote from the Duplicacy design document:

Store each chunk in the storage using a file name derived from its hash, and rely on the file system API to manage chunks without using a centralized indexing database
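As a rough illustration of that idea (a minimal sketch, not Duplicacy's actual code), here is what "name the chunk file after its hash, then probe the destination for existence" could look like against an SFTP backend via Paramiko. The host, remote path layout, and function name are hypothetical:

```python
# Sketch only: content-addressed chunk upload over SFTP, assuming key-based auth.
import hashlib
import io

import paramiko

CHUNK_DIR = "/backup/chunks"  # hypothetical remote layout


def store_chunk(sftp: paramiko.SFTPClient, chunk: bytes) -> str:
    """Upload a chunk only if a file named after its hash is not already present."""
    name = hashlib.sha256(chunk).hexdigest()
    remote_path = f"{CHUNK_DIR}/{name}"
    try:
        sftp.stat(remote_path)        # one round trip: does this chunk already exist?
        return "skipped"              # a previous run (or another writer) uploaded it
    except FileNotFoundError:
        sftp.putfo(io.BytesIO(chunk), remote_path)
        return "uploaded"


client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("backup.example.com", username="backup")  # hypothetical host
sftp = client.open_sftp()
print(store_chunk(sftp, b"example chunk data"))
```

The existence probe is a single round trip per chunk, so no up-front listing of the whole storage is needed, which is also why multiple writers can coexist without a centralized index.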