I already mentioned this briefly in the call for image UI beta testers: the upcoming Prodigy v1.10 will feature a completely new interface and a suite of recipes for efficient relation and dependency annotation. We're especially excited about this one, since it's been one of the missing features in Prodigy, and it took a while to develop efficient solutions for the various challenges involved.
- manually annotate labelled dependencies and relationships between words and phrases
- join entity and relation annotation: merge spans and assign directional relations to them in one interface
- built-in recipes for dependency parsing, coreference resolution and fully custom relations
- define custom workflows and automate as much as possible: use match patterns or a pretrained model to highlight spans or put a model in the loop to suggest relations
- focus on what matters and ensure data quality and consistency: use match patterns to disable tokens that should be excluded from annotation
- keyboard shortcuts and support for touch devices
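To illustrate the match patterns mentioned above: Prodigy patterns files are JSONL, with each entry holding a label and a spaCy-style token pattern. The sketch below writes a small hypothetical patterns file; the labels and the exact way the new relations recipes consume a "disable" pattern are assumptions for illustration, not the confirmed recipe API.

```python
import json

# Hypothetical patterns.jsonl entries in Prodigy's standard match pattern
# format: a label plus a list of spaCy token attribute dicts.
# How the new relations recipes interpret these (e.g. for disabling
# tokens) is an assumption here, not documented behavior.
patterns = [
    # Highlight a specific span by exact lowercase token match
    {"label": "GENE", "pattern": [{"lower": "p53"}]},
    # Match punctuation tokens, e.g. to exclude them from relations
    {"label": "DISABLE", "pattern": [{"is_punct": True}]},
]

with open("patterns.jsonl", "w", encoding="utf8") as f:
    for entry in patterns:
        f.write(json.dumps(entry) + "\n")
```

A file like this could then be passed to a recipe via a patterns argument, the same way patterns are used in existing Prodigy workflows such as `ner.manual`.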
Examples & screenshots
Beta testing and requirements
If you want to help beta test the new features and try them on your data, feel free to send me an email at firstname.lastname@example.org or a DM on Twitter. Requirements are:
- Current Prodigy user on v1.9 (just include your order ID starting with #EX... when you get in touch)
- Should have an immediate project you can test it on, and time to test it this week
If you have any questions, I'm also happy to answer them in this thread! If you have a biomedical use case, @SofieVL is also happy to help you get set up with the new relations workflows.