Howdy, and please accept my apologies for bumbling about with naive questions. I'm planning my first NLP project and am looking for a tool to make annotating my large collection of text samples easier.
From what I can tell so far, Prodigy seems like a great tool for this, but given the cost, I figured I'd verify my understanding of how it fits into the toolchain before committing.
I'm planning to do some NLP work for Apple OS apps: a bit of NER and classification. My current, limited understanding is that Core ML provides some existing models which may be of use, but more likely I'll want to supply my own models from other sources; probably not spaCy, based on my googling, but maybe TensorFlow.

I am NOT asking whether I can create those models with Prodigy, but rather whether I can use it in a tool-agnostic way to generate annotated data that I can feed into whatever framework I end up using to train models. From reading the docs, it seems like exporting annotated data is generally the whole point, and the connection to spaCy is a side benefit rather than a locked-in workflow. So, is my understanding correct?
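For context, the workflow I'm picturing, based on my reading of the docs, looks roughly like this: export a dataset with `prodigy db-out` and then load the resulting JSONL in whatever framework I settle on. The dataset name and the sample record below are made up; I'm just sketching my assumption that the export is plain JSONL any tool can consume:

```python
# Assumed export step (run in a shell; "my_ner_data" is a hypothetical dataset name):
#   prodigy db-out my_ner_data > annotations.jsonl
#
# If the export really is one JSON object per line, then any framework
# (TensorFlow, Core ML tooling, etc.) can read it with no spaCy in the loop.
import json

def load_annotations(path):
    """Read JSONL annotations into a list of dicts, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# A made-up example of what I understand one NER record to look like:
sample = '{"text": "Apple ships Core ML", "spans": [{"start": 0, "end": 5, "label": "ORG"}]}'
record = json.loads(sample)
print(record["spans"][0]["label"])  # ORG
```

If that picture is right, the spaCy coupling really would be optional and I could hand the dicts off to my own training pipeline.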
Thanks for not mocking me too hard for asking what I assume will feel like a very obvious question in six months.