Prodigy provides built-in recipes for many NLP tasks, using models from our NLP library spaCy. You can also implement your own custom recipes, allowing you to use any machine learning logic you like. However, many users have asked for a more hybrid approach that would let them import a custom model while still taking advantage of the existing solutions implemented in Prodigy and spaCy. I'm now pleased to provide a quick teaser of what that could look like, as we've been making great progress on enhanced PyTorch and spaCy integration. You can read more about our plans here:
Please note that the issue is still a draft, and the code within it is for example purposes only — but I hope it gives you an idea of how we expect things to work.
The key part of the integration is two small wrapper classes within Thinc, which translate prediction, update, saving and loading calls from Thinc's API to the wrapped PyTorch model's methods. This should make it very easy to use PyTorch models within Prodigy. We hope to provide similar support for TensorFlow and other libraries shortly as well.
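To make the wrapper idea concrete, here's a minimal sketch of the adapter pattern involved — this is illustrative only, not Thinc's actual implementation, and the `DummyModel` stands in for a PyTorch module so the example runs without `torch` installed. The wrapper exposes a Thinc-style interface (`__call__` for prediction, `begin_update` for training, `to_bytes`/`from_bytes` for serialisation) and forwards each call to the wrapped model:

```python
import pickle


class DummyModel:
    """Stand-in for a PyTorch module computing y = w * x."""

    def __init__(self, w=2.0):
        self.w = w

    def forward(self, inputs):
        return [self.w * x for x in inputs]


class PyTorchWrapper:
    """Sketch of an adapter exposing a Thinc-style API over a wrapped model."""

    def __init__(self, model):
        self._model = model

    def __call__(self, inputs):
        # Prediction: just run the wrapped model's forward pass.
        return self._model.forward(inputs)

    def begin_update(self, inputs):
        # Training: run the forward pass and return the outputs together
        # with a callback that backpropagates the gradient.
        outputs = self._model.forward(inputs)

        def finish_update(d_outputs):
            # A real PyTorch wrapper would call backward() and step an
            # optimizer; here we apply the chain rule for y = w * x.
            return [self._model.w * d for d in d_outputs]

        return outputs, finish_update

    def to_bytes(self):
        # Saving: serialise the wrapped model's parameters.
        return pickle.dumps(self._model.w)

    def from_bytes(self, data):
        # Loading: restore the wrapped model's parameters.
        self._model.w = pickle.loads(data)
        return self
```

The point of the pattern is that any code written against the Thinc-style interface — a Prodigy recipe, an active learning loop — never needs to know which library implements the model underneath.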
We’re particularly keen to start experimenting with PyTorch models for object detection and image classification, as this would allow us to finally finish built-in support for these tasks. There are still a few interesting research questions around this, however. The main issue is that for active learning to work well, we need a model that learns very quickly. For the text classification model, I solved this by making the model an ensemble of a unigram bag-of-words classifier and a convolutional neural network. The CNN alone learns slowly, especially if it’s not initialised with pre-trained vectors. Stacking the CNN with a unigram bag of words gives much better results in the first few iterations. I’m sure we can find a similar trick for an object detection model, but I don’t yet know what it will be. Suggestions welcome!
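The ensembling trick can be sketched in a few lines. This is a hedged illustration, not Prodigy's actual text classifier: the class and function names are made up, and a stub stands in for the CNN. The idea is simply that the two components' scores are summed, so the fast-learning bag-of-words term carries the model in the first few iterations, before the CNN has learned anything useful:

```python
from collections import defaultdict


class UnigramBow:
    """Tiny unigram bag-of-words scorer with per-word weights learned online."""

    def __init__(self):
        self.weights = defaultdict(float)

    def score(self, words):
        # Sum the weights of the words present in the document.
        return sum(self.weights[w] for w in words)

    def update(self, words, gradient, lr=0.1):
        # One step of gradient descent on the per-word weights. Because
        # each weight affects only documents containing that word, a
        # handful of examples is enough to move the scores meaningfully.
        for w in words:
            self.weights[w] -= lr * gradient


def ensemble_score(bow, cnn_score, words):
    # Stack the two components by summing their scores: early on the
    # bag-of-words term dominates; as the CNN trains, its score starts
    # contributing context-sensitive signal the unigram model can't learn.
    return bow.score(words) + cnn_score(words)
```

For example, after a single update on a document containing "great", the ensemble already scores new documents containing that word differently, even while the CNN component (here a stub returning 0.0) is still effectively untrained.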