For binary annotation, you can pass in data with both a "label" and "spans", and the label will be rendered above the card. The manual interface currently only supports a single annotation objective at a time. The reason the interface is set up this way is that Prodigy is centered around quick, automated annotation workflows that focus on a single decision at a time. Selecting between several different entity types is already challenging, and you'll see much better results if you or your annotators can focus on a narrow range of labels and decisions and don't have to juggle two very different objectives. Even just for manual NER annotation, you often want to focus on two or three labels at a time and make several passes over the data.
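For example, a binary task with both a top-level "label" and "spans" could look like this. Stream entries are plain dictionaries, so they're easy to generate in Python; the text and span values below are made up for illustration:

```python
import json

# One binary annotation task: the top-level "label" is rendered above
# the card, while "spans" highlights the candidate entity in the text.
task = {
    "text": "Apple updated its privacy policy last week.",
    "label": "ORG",  # shown above the card
    "spans": [
        # character offsets into "text"
        {"start": 0, "end": 5, "label": "ORG"},
    ],
}

# Streams are typically stored as JSONL: one JSON object per line.
line = json.dumps(task)
print(line)
```

Loading the file back is just the reverse: read one line at a time and call `json.loads` on each.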
Annotating both named entities and text categories at the same time often isn't a good idea, so in terms of annotation efficiency, I really wouldn't recommend a workflow like that. Instead, it's usually much more efficient to make several passes over the data and focus on one objective at a time. This also lets you run separate experiments for each model component you want to train, try out different approaches per component and so on.
You'll likely want to iterate on different label schemes as well, running quick experiments in between to find the approach that works best. You might also want a slightly different example selection for the different tasks (NER and text classification) to achieve the best possible results.
Prodigy currently only ships with the compiled source of the web application. However, the `html` interface lets you build any custom UI, and it even comes with experimental support for JavaScript.
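As a rough sketch, a custom recipe using the `html` interface returns a components dictionary along these lines. The `@prodigy.recipe` decorator and server wiring are left out so the snippet stays self-contained, and the dataset name, template and script are placeholder assumptions:

```python
# Sketch of the components a custom recipe using the "html" interface
# might return. In a real recipe, this dict would be returned from a
# function decorated with @prodigy.recipe; it's built standalone here.

def get_stream():
    # Each task provides the variables referenced in the HTML template.
    yield {"text": "Apple updated its privacy policy.", "label": "ORG"}

components = {
    "dataset": "my_dataset",   # where annotations are saved (placeholder)
    "stream": get_stream(),    # generator of task dicts
    "view_id": "html",         # use the custom HTML interface
    "config": {
        # Mustache-style template: task fields fill the placeholders.
        "html_template": "<h2>{{label}}</h2><p>{{text}}</p>",
        # Experimental: custom JavaScript injected into the app.
        "javascript": "console.log('custom UI loaded')",
    },
}
```

The template variables (`{{label}}`, `{{text}}`) are resolved per task from the keys of each stream entry, so any fields you put on the task dicts become available to your markup.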