I've trained a model with gold-standard data and now I want to improve it using ner.teach.
The problem is that Prodigy shows a score of 1.0 for almost all of its suggestions, and the vast majority of them are actually correct. It's also not showing me many of the entities that I actually need to improve the score on.
Should I run ner.teach with only the labels that I want to improve?
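Concretely, I mean restricting the recipe to specific labels, something like this (the dataset name, model path, source file, and label names here are placeholders for my actual setup):

```shell
# Only queue suggestions for the labels I want to improve
prodigy ner.teach my_dataset ./my_model ./samples.jsonl --label PER,ORG
```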
Another thing: the percentage in the UI went up to 90% really fast, and now, at 95%, the model doesn't seem to improve anymore.
I'm assuming that's normal?
I've also noticed that the model is making mistakes in this recipe that it isn't making with ner.correct. I could be wrong about this, but maybe it's the tokenization? I do have a custom tokenizer, but that should be part of the model I'm using.
Also, the UI shows the lang as en when it should be de (it's set in my train.cfg), but I don't see a way to override that.
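For reference, this is what my train.cfg has (assuming the standard spaCy v3 config layout; the rest of the file is omitted):

```ini
[nlp]
lang = "de"
```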
It's worth noting that I'm also setting --unsegmented. My model doesn't have a sentencizer, and whatever segmentation the recipe was doing was messing up my samples big time.
Any suggestions would be much appreciated.