Sorry for the many questions - I hope I'm not wasting too much of your time. I couldn't agree more with this user's comment the other day.
For the sake of simplicity, let's say I want to make an entity SEK_AMOUNT, e.g. capture 10 from the expression SEK 10. I'd like to teach an NER model to do this - it's a toy example.
Using patterns, I can easily capture SEK 10 with only a few false positives, whereas capturing just 10 isn't possible without getting a lot of false positives. How do you propose to proceed?
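For illustration, this is roughly the kind of pattern I have in mind (the label name and example text are just made up for this toy case, spaCy v2 API):

import spacy
from spacy.pipeline import EntityRuler

nlp = spacy.blank("en")
ruler = EntityRuler(nlp)
# "SEK" followed by a number-like token matches "SEK 10" reliably;
# a pattern for the bare number alone would fire on every numeric token.
ruler.add_patterns([{"label": "SEK_AMOUNT", "pattern": [{"LOWER": "sek"}, {"LIKE_NUM": True}]}])
nlp.add_pipe(ruler)

doc = nlp("The ticket costs SEK 10 at the door.")
print([(ent.text, ent.label_) for ent in doc.ents])  # [('SEK 10', 'SEK_AMOUNT')]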
In my real-world case I have a custom component that uses the EntityRuler. It creates entities, but with false positives. It's still a good starting point for collecting entities from scratch, using the existing entities plus some logic around them; however, the entities from my component should NOT be saved as entities. Should I write my own recipe for this? Probably something close to ner.match?
Thank you.
Off-topic: when will you announce spaCyIRL for 2020? Hopefully you'll continue this year's great success!
I think a custom recipe would work well for your problem. You could recognise SEK 10 as the entity and then trim it down with a rule afterwards. Alternatively, you could have the model recognise the whole phrase and only use the numeric part in your application.
Using the patterns to train a model, probably with a custom recipe so you have better control and can customise things, seems like a good approach.
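The trimming rule could look roughly like this - just a sketch, assuming a SEK_AMOUNT label and that the numeric part is a single number-like token:

from spacy.tokens import Span

def trim_to_amount(doc):
    # Replace SEK_AMOUNT entities like "SEK 10" with a span covering only the number.
    new_ents = []
    for ent in doc.ents:
        if ent.label_ == "SEK_AMOUNT":
            num_tokens = [t for t in ent if t.like_num]
            if num_tokens:
                tok = num_tokens[0]
                new_ents.append(Span(doc, tok.i, tok.i + 1, label=ent.label_))
                continue
        new_ents.append(ent)
    doc.ents = new_ents
    return doc

Something like this could be added as a pipeline component that runs after whatever predicts the entities.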
# Prodigy helpers used below; `nlp` and `Entity` come from my own code (omitted here)
from prodigy.core import recipe, recipe_args
from prodigy.components.db import connect
from prodigy.components.loaders import get_stream
from prodigy.util import log


def add_metric_amount(stream):
    for task in stream:
        # Collect the amount entities produced by my existing pipeline
        spans = [
            {"label": "M_AMT", "start": e.start_char, "end": e.end_char}
            for e in nlp(task["text"]).ents
            if e.label_ in (Entity.AmountRange.label, Entity.Amount.label)
        ]
        log(f"Seeing {len(spans)} spans")
        # Yield one task per matched span (the same task dict is reused for each span)
        for span in spans:
            task["spans"] = [span]
            yield task


@recipe(
    "ner.custom-match",
    dataset=recipe_args["dataset"],
    source=recipe_args["source"],
    api=recipe_args["api"],
    loader=recipe_args["loader"],
    exclude=recipe_args["exclude"],
    resume=(
        "Resume from existing dataset and update matcher accordingly",
        "flag",
        "R",
        bool,
    ),
)
def custom_match(
    dataset, source=None, api=None, loader=None, exclude=None, resume=False,
):
    log("RECIPE: Starting recipe ner.custom-match", locals())
    DB = connect()
    stream = get_stream(
        source, api=api, loader=loader, rehash=True, dedup=True, input_key="text"
    )
    return {
        "view_id": "ner",
        "dataset": dataset,
        "stream": add_metric_amount(stream),
        "exclude": exclude,
    }
It works fine, BUT it seems that it only yields a task for the first matched span, not a task for each matched span. And I'm running out of tasks, although there should be tens of thousands.
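For reference, a variant of the generator I'm considering, which gives each span its own copy of the task and re-hashes it so per-span tasks aren't treated as duplicates - just a sketch, and the set_hashes details are a guess on my part:

import copy
from prodigy import set_hashes

def add_metric_amount(stream):
    for task in stream:
        spans = [
            {"label": "M_AMT", "start": e.start_char, "end": e.end_char}
            for e in nlp(task["text"]).ents
            if e.label_ in (Entity.AmountRange.label, Entity.Amount.label)
        ]
        for span in spans:
            eg = copy.deepcopy(task)              # a fresh dict per span instead of reusing `task`
            eg["spans"] = [span]
            yield set_hashes(eg, overwrite=True)  # recompute hashes now that the spans differ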
I did around 200 annotations and got a model with 40% accuracy (I just wanted to sanity-check the model) using en_vectors_web_lg. Then I tried ner.teach with the new model, but it suggested almost every token as an entity, which puzzles me. Then I just rejected a whole lot, so I now have 265 accepted and 1176 rejected annotations in total. Now when I try to run ner.batch-train again, I get the following error:
ValueError: [E103] Trying to set conflicting doc.ents: '(166, 167, '!M_AMT')' and '(154, 167, '!M_AMT')'. A token can only be part of one entity, so make sure the entities you're setting don't overlap.
I'm guessing it has to do with the binary annotation tasks (one task per matched span instead of one task per document)? The initial batch-train output puzzles me as well:
Loaded model en_vectors_web_lg
Using 50% of accept/reject examples (210) for evaluation
Using 100% of remaining examples (305) for training
Dropout: 0.2 Batch size: 10 Iterations: 10
BEFORE 0.000
Correct 0
Incorrect 24
Entities 0
Unknown 0
Are you using spaCy v2.2? The handling of binary annotations is currently the only incompatibility with the existing version of Prodigy – we'll be resolving that in the upcoming Prodigy v1.9.
To me there is a mismatch in those outputs; the numbers just don't add up. And the accuracy is incredibly bad due to false positives (i.e. almost all tokens are marked as entities). Any thoughts?
The binary annotations work best for improving an existing model. If you're starting from scratch, often the model struggles to refine the definition of the task, given the weak supervision. So that might be what's happening here. You could try using the --no-missing flag, which declares that any entities that aren't present are incorrect. If your annotations don't have many missing entities, this would probably work quite well.
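For example, something along these lines (dataset and output path are placeholders):

prodigy ner.batch-train your_dataset en_vectors_web_lg --output /tmp/m-amt-model --no-missing --eval-split 0.5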