Hi! The default batch size is 10 examples, and each request asks for a single batch – so 110 examples (assuming there are no duplicates and no existing answers in the dataset) would mean that 11 batches were sent out but didn't come back answered.
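If you want to change that number, the batch size can be overridden via the `"config"` returned by your recipe (or globally in your `prodigy.json`). Here's a minimal sketch, where the recipe name and the way the stream is loaded are just placeholders:

```python
import prodigy

@prodigy.recipe("my-recipe")  # hypothetical recipe name
def my_recipe(dataset, source):
    # Load the incoming examples (here, one text per line)
    stream = ({"text": line.strip()} for line in open(source, encoding="utf8"))
    return {
        "dataset": dataset,            # dataset the annotations are saved to
        "stream": stream,
        "view_id": "text",             # built-in interface that renders the text
        "config": {"batch_size": 10},  # examples sent out per request (the default)
    }
```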
The underlying problem here is that if you have multiple annotators working on the same data and a batch is sent out, Prodigy has no way of knowing whether it will come back. Maybe someone is still working on it, maybe they're offline – the app doesn't track individual users. So by default, the server won't re-send a batch, to prevent duplicate annotations.
However, the batches are not gone or lost: Prodigy keeps a very detailed record of the examples and annotations via the hashes it assigns. So if you restart the server, the unannotated examples are added back to the queue. Alternatively, you can make your stream "infinite" and assume that any example whose hash isn't in the dataset yet after the first iteration should be sent out again, until all hashes are in the dataset. Here's a code example that shows the idea. This works well for a finite stream that you don't necessarily need to annotate in order.
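Below is a minimal sketch of that pattern. The `get_infinite_stream` name is just a placeholder, and it assumes your annotations are being saved to the dataset name you pass in:

```python
from prodigy import set_hashes
from prodigy.components.db import connect

def get_infinite_stream(stream, dataset):
    # Materialize the stream so we can loop over it more than once
    # (a generator would be exhausted after the first pass)
    examples = [set_hashes(eg) for eg in stream]
    db = connect()  # connect to the database using your prodigy.json settings
    while True:
        # Hashes of all examples that are already annotated in the dataset
        task_hashes = set(db.get_task_hashes(dataset))
        # Only re-send examples whose hashes aren't in the dataset yet
        todo = [eg for eg in examples if eg["_task_hash"] not in task_hashes]
        if not todo:
            break  # all hashes are in the dataset, so we're done
        for eg in todo:
            yield eg
```

In your recipe, you'd then wrap the original stream, e.g. `stream = get_infinite_stream(stream, dataset)`.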
We'll also be adding a new feature that's a bit more complex and lets you enforce the exact ordering of the batches that are sent out – see this thread for a more in-depth discussion. If this setting is enabled, the server will always respond with the same batch until it has received the answers for it. The trade-off is that you can end up with duplicates if two people annotate in the same session (e.g. both accessing the app without a session name appended to the URL). So this way of handling the stream works best if your annotators are all annotating the same data in their own separate sessions (overlapping feed) and it's important that examples go out in the exact order they're loaded in.
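For example, each annotator could open the app in their own named session by appending a session name to the URL (host, port and names here are just placeholders):

```
http://localhost:8080/?session=alice
http://localhost:8080/?session=bob
```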