That’s really useful - so our pipeline becomes: POS -> NER -> DEP -> Rules, where the rules stage is hand-written logic (rather than ML) that picks out the relevant verbs, uses their POS tags to work out tense, and extracts the NEs that depend on them? I can see what you mean about the PyData talk - pretty useful in this context!
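Just to check I've understood, here's roughly what I'm picturing in spaCy terms. Very much a sketch: the model name, the verb list, and the ORG/DATE labels are placeholders I've made up for illustration, not what we'd actually use.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # single pipeline gives us POS, DEP and NER

def extract_career_events(text):
    """Rule-based pass over the parsed doc: find candidate verbs,
    read off their tense, and pull out the NEs that depend on them."""
    doc = nlp(text)
    events = []
    for token in doc:
        # Rule 1: only consider verbs from a hand-picked list
        # ("work"/"join"/"lead"/"manage" are hypothetical examples here)
        if token.pos_ != "VERB" or token.lemma_ not in {"work", "join", "lead", "manage"}:
            continue
        # Rule 2: use the fine-grained tag for tense (VBD/VBN = past)
        tense = "past" if token.tag_ in {"VBD", "VBN"} else "present"
        # Rule 3: keep entities whose dependency path leads back to this verb
        ents = [ent for ent in doc.ents
                if token in ent.root.ancestors and ent.label_ in {"ORG", "DATE"}]
        if ents:
            events.append({"verb": token.lemma_, "tense": tense,
                           "entities": [(e.text, e.label_) for e in ents]})
    return events

print(extract_career_events(
    "She worked at Google from 2015 to 2019 and now leads a team at DeepMind."))
```

In other words the ML components stop at the parse, and everything after that is deterministic logic over the tree - is that the right way to think about it?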
Happy for you to use our experience if you’d like! Essentially we want to extract career histories from bios (as you might expect) within a recruitment context; the good thing about the project is that we start with loads of scraped training data that’s very accurate for NER, but extending the pipeline is trickier and it’s really helpful to get the advice!