✨ Demo: fully manual NER annotation interface

Thanks! I’m still working on implementing it, so there’s nothing to test yet – but I’ll try my best to get it finished for the upcoming release. (Not sure if we’re ready for some sort of prodigy-nightly beta tester program just yet – but we might consider it once we have a larger user base!)

In the meantime, you can already achieve something similar using the boundaries interface and ner.mark – see here for details. It currently only supports one entity span per task, though.

Hi Ines!

Thanks for the quick reply and the info. I’m currently using ner.mark, but I’m running into some issues where sentences are being split in the middle of entities.
e.g. the sentence “The sentence contains the ENTIRE ENTITY with some filler at the end.”
is split into:
The sentence contains the ENTIRE
ENTITY with some filler at the end.

Do you have any tips for how I can tweak the model’s sentence splitting so it gives me the entire sentence instead of splitting on punctuation and other triggers? This data is from the web, so it is a bit messy, but I can skip the bad cases.

Also, I’m assuming this will create a new problem, where I’ll potentially have two entities within the same sentence. Can I still label multiple entities in the same annotation task in the boundaries interface?

I’m really excited about the new interface that you’re working on. It will make this process so much simpler.

Yes, what you describe is one of the main problems with the boundaries interface at the moment. We’ve been going back and forth on this, and it’s been difficult to find the right balance of trade-offs in terms of efficiency, user experience, annotation speed and so on.

If you look at the source of the mark function in prodigy/recipes/ner.py, you can adjust the token slice by using a different length, or use smaller steps to create overlaps between the slices:

for i in range(0, len(doc), 9):  # step through the document 9 tokens at a time
    span = doc[i:i+9]  # the slice of tokens shown for annotation
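
If you want neighbouring slices to overlap – so an entity that gets cut off at the end of one slice still shows up in full in the next one – a minimal sketch could look like this (the window and stride values are just examples):

window = 9  # number of tokens shown per annotation task
stride = 6  # step smaller than the window, so consecutive slices share 3 tokens
for i in range(0, len(doc), stride):
    span = doc[i:i + window]  # overlapping, annotatable token slice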

In theory, the interface can support any number of tokens – and up to 30 if you want to use keyboard shortcuts (shift+num for tens and shift+alt+num for twenties – e.g. shift+5 for 15).

You can also remove the split_sentences(nlp, stream) pre-processor to disable splitting incoming texts into sentences. This means that the texts will be shown as they come in and you might need to do some pre-processing yourself to make them easier to work with or annotate. But it also gives you more control over how this is done.
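
In a custom recipe, that could look roughly like the sketch below – the helper function and its split argument are hypothetical, only the split_sentences pre-processor from prodigy.components.preprocess is the real thing:

from prodigy.components.preprocess import split_sentences

def prepare_stream(nlp, stream, split=False):
    # hypothetical helper: only split incoming texts into sentences on request,
    # otherwise pass the examples through exactly as they come in
    if split:
        stream = split_sentences(nlp, stream)
    return stream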

Good news – I successfully integrated the new interface into the web app last night, and it’s working pretty well so far. It still needs testing and adjustments, but it looks like we’re definitely on track for shipping it with the next release (possibly as an experimental new feature, but that still means you’ll get to try it out) :tada:

When is the next release?

Not asking you for a promise here, I’m just eager :grinning:, and I imagine I’m going to want to use this feature.

We’ll start working on getting everything ready next week. @honnibal is still travelling, and it’s been important to us not to push any rushed updates, especially not over the holidays. But I’m definitely looking forward to getting the new features out to the community so people can start testing and using them – this is always one of my favourite parts of software development :smiley:

Amazing highlighting interface! The NER active learning has been a little wobbly for us when training from scratch, so this may get us started on the right track.

This may mess up the indices, but it would be great if you could highlight between tokens (e.g. for detecting missing words). Though I’m thinking that highlighting the two tokens surrounding the missing word may suffice.

Thanks for the great work as always.

Thanks!

Actually, I think your idea of highlighting the two tokens is probably better – even if the interface did support highlighting between words. “Highlight the two tokens around the missing word” – that’s a great, straightforward annotation prompt and it’s probably quite fast, because it requires less clicking precision. The user just needs to hit somewhere within the two surrounding tokens.

This is always something I’ve found frustrating about click-and-drag interfaces – the user needs to click very precisely, and that wastes a lot of energy and attention. So I really like the token boundaries solution we’ve come up with here, and it also fits well with the Prodigy philosophy – i.e. let the machine do as much as possible. We could still offer a character-based mode that users can toggle – but I think in most cases, it’s probably more efficient to just add one or two custom tokenization rules if you need different boundaries (instead of spending ten seconds more on every annotation decision).
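
To make the “custom tokenization rules” idea a bit more concrete, here’s a minimal sketch assuming a spaCy pipeline – the “e-mail” rule is purely hypothetical and just illustrates the mechanism:

from spacy.lang.en import English
from spacy.symbols import ORTH

nlp = English()
# hypothetical rule: keep "e-mail" as one token instead of letting the
# default rules split it on the hyphen
nlp.tokenizer.add_special_case("e-mail", [{ORTH: "e-mail"}])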

It’s actually pretty similar to what we were building with tokenized highlighting (except a lot nicer). Nice that double-clicking also highlights a single token.

One thing that could be tricky is punctuation. What happens if the text is “elit.” – would it highlight “elit” or “elit.”?

Thanks.

Yeah, the double-clicking is actually the browser’s native behaviour – I hadn’t really thought about this before I started developing the interface. I also never realised that different browsers handle this differently, so it needed a few small hacks to (hopefully) make it work consistently.

If the tokenizer splits off the ., it will be rendered as a separate token and will only be highlighted if you select it. (This is the only noticeable visual difference here – punctuation, contractions etc. are separated by whitespace. But it also makes it more obvious that they are separate tokens.)

So if you’re annotating a lot of punctuation, this might still be a little fiddly… In this case, you might also want to add a few more tokenization rules to force stricter splitting and ensure you don’t end up with punctuation attached to a token. But as I said, the idea here is that writing one or two regular expressions will still be more efficient than pixel-perfect selection.
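
For example (purely illustrative, assuming a spaCy pipeline like en_core_web_sm), you could add an extra infix pattern so slashes and hyphens are always split off:

import spacy
from spacy.util import compile_infix_regex

nlp = spacy.load("en_core_web_sm")  # assumed English pipeline
# hypothetical extra rule: always split tokens on slashes and hyphens
infixes = list(nlp.Defaults.infixes) + [r"[/\-]"]
nlp.tokenizer.infix_finditer = compile_infix_regex(infixes).finditer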

It would be great if, instead of a dropdown, you could introduce a right-click menu after selecting the word(s) and then pick the label to annotate from that menu.

Also a question: how do you make ner.match not ask the same question twice, and tag as many matches as are available in the text?

Great work - I’m loving it.

Thanks! At the moment, the interface assumes that you select the label first and then highlight the span. This has several advantages for the UI:

  • The entity span can be “locked in” immediately after highlighting the text, without requiring any additional user action. So if you’re annotating several entities of the same label in a row, you’ll only have to select the label once.
  • The UI can use the browser’s native behaviour and functionality for highlighting, label selection etc. This reduces complexity and makes it easier to ensure cross-browser compatibility. For example, we won’t have to re-engineer how the browser handles selecting text – this is already built in, and the native Selection API does the rest.
  • The labels dropdown has a tabindex, so you can tab back into it after adding an entity. Highlighting an entity still requires clicking, but you’ll be able to do everything else using your keyboard as well, if you prefer. So a workflow could look like this: TAB + P (selects “PERSON”) → highlight entity → highlight another entity → TAB + O (selects “ORG”) → highlight entity → etc.

Would be an awesome addition! Can’t wait to try it out.

Just released v1.2.0, which includes the ner_manual interface and an ner.manual recipe :tada: (Note that this replaces the previous ner.mark recipe and "boundaries" interface.)

You should have received an email about the update and we’ve reset the download limit. You can now also try the new interface in the live demo: https://prodi.gy/demo?view_id=ner_manual

Congrats :tada:

One piece of feedback: maybe it could show all of the categories as buttons, so we can see the labels at all times (fewer clicks and better visibility of the labels).

Thanks! :tada:

This is a good idea – at least, it could be an option that users could toggle. For long texts with many labels, a choice-like list could easily get a little messy. But if you only have a few labels, this is definitely nicer. Even better if we also add the keyboard shortcuts. Will definitely try it out and keep you updated!

Edit: The only issue here is that in order to keep the highlighting flow smooth, the labels should be selected before the span. (Changing a span retroactively is difficult, because it means there needs to be a way to select an already added span, a different highlighting style for selected entities etc.) But a lot of this comes down to visual presentation – so maybe we should just display the labels as buttons within the annotation card heading… (Sorry, mostly thinking aloud here :wink: )

Quick preview of an alternative label style – still need to add keyboard shortcuts. It currently checks whether a "ner_manual_label_style" option is set (either "dropdown" or "list") and sets the label style based on that. If not, a list is used for label sets of eight or fewer, and a dropdown for larger sets.
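
If you want to force one style for a particular workflow, something like the custom recipe sketch below should work once this is out – the recipe name is a placeholder, and the stream pre-processing (adding tokens etc.) is omitted here:

import prodigy
from prodigy.components.loaders import JSONL

@prodigy.recipe("ner.manual-list-labels")  # hypothetical recipe name
def manual_list_labels(dataset, source):
    stream = JSONL(source)  # load raw examples from a JSONL file
    return {
        "dataset": dataset,
        "stream": stream,
        "view_id": "ner_manual",
        "config": {"ner_manual_label_style": "list"},  # or "dropdown"
    }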

Congratulations, @ines! Prodigy is exactly what my training workflow has been missing. Now I’m able to better leverage the expertise of non-technical stakeholders.

Please allow me to echo @imranarshad in requesting further investigation of a right-click menu following text selection. I feel it’s more natural for users – plus, I had the opportunity to see an excellent example in Palantir’s context menu, which they make available via their Blueprint framework.

I’m happy to contribute in any way, as I have considerable front-end experience with React – and again, thank you for such a wonderful annotation tool!

@lukateake Thanks a lot!

The main problem I currently see with a context menu is that it hijacks the browser’s native behaviour (something I think should ideally be avoided if possible) with very little benefit for the actual user experience. After all, the main purpose of the interface is to be as fast and intuitive to navigate as possible.

I also think there’s a big advantage in making all available user actions visible at first glance and not hiding them behind other interactions. This is a pretty consistent UX pattern across all of Prodigy’s interfaces, and it also supports both click-based and keyboard-enhanced workflows, depending on the user’s preference. For example, in my preview above, an annotation sequence could look like: 1 → highlight → 3 → highlight → A (accept). (I haven’t added the key indicators yet, but it’d be similar to the choice interface.)

Adding another click-based interaction to the labelled entity spans would also introduce several other problems: we’d have to break with the simple workflow of immediately locking in the spans. Instead, the UI would have to “wait” for the user to set the label. Deleting spans would also be more difficult. Having a significant action on both the left and right click is pretty problematic – especially if it’s deleting (!) the span vs. setting a label.

I’m trying to make it work with my own recipe (no luck so far), but is there any way to use manual annotation with annotations already suggested via patterns? It would save a lot of time.
