As it says, Prodigy trains a model during annotation (labeling the data). That also means Prodigy will ask about the examples it hasn't seen before and isn't sure about. But my use case is a bit different: I want to label the whole dataset and don't want a model training in the backend and deciding which examples to show. So is it possible in Prodigy to disable model training during annotation so that I can label the whole dataset?
Sure – recipes like mark (just label whatever comes in), ner.manual (mark spans in a text) or ner.make-gold (correct the model's predictions) don't update a model in the loop at all, and serve up the stream as it comes in. You can always write your own recipes for even more custom workflows.
See here for examples of how the recipes are implemented. Adding an update callback and re-sorting / filtering the stream based on the model is an optional feature.
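To illustrate the point above, here's a minimal sketch of what a custom recipe with no model in the loop might look like. The recipe name, the inline example texts and the helper are all hypothetical, and the real version would use Prodigy's `@prodigy.recipe` decorator and a loader (e.g. `JSONL` from `prodigy.components.loaders`) instead of the hard-coded stream. The key point is simply that the returned components dict contains no `"update"` callback, so nothing is trained and the examples are served in order:

```python
# Hypothetical custom recipe sketch: annotate everything, no model in the loop.
# In a real recipe this function would be registered with @prodigy.recipe
# and the stream would come from a Prodigy loader reading `source`.

def ner_manual_static(dataset, source, label):
    def get_stream():
        # Placeholder stream; a real recipe would load tasks from `source`,
        # e.g. with JSONL(source) from prodigy.components.loaders.
        for text in ["First example.", "Second example."]:
            yield {"text": text}

    return {
        "dataset": dataset,        # dataset the annotations are saved to
        "stream": get_stream(),    # examples served as-is, in order
        "view_id": "ner_manual",   # manual span-highlighting interface
        "config": {"labels": label.split(",")},
        # note: no "update" callback returned -> no training while annotating,
        # and no re-sorting/filtering of the stream based on model scores
    }
```

Because nothing updates or re-scores the stream, every example in the source is shown exactly once, in order, which matches the "label the whole dataset" use case.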