What to do if train-curve shows a slight decrease in the last sample

I'm creating a new dataset, and so far I've made about 500 annotations. I ran the train-curve command, and the score decreased from 0.3 to 0.29 in the last sample. What should I do at this point to make sure I don't end up annotating a dataset that won't work? Are there strategies, like going back to check that the annotations were consistent, or troubleshooting to find a root cause if possible? Or should I just keep annotating and hope that it improves?

Thank you!

Hi!

It's always difficult to provide generic advice for these kinds of ML questions, as a lot depends on the data (and its size) and what the accuracy table actually looks like in detail.

In general, a 1-percentage-point drop isn't necessarily something to worry about, but it depends on the larger trend. It might be that you've reached a sort of "plateau", where performance won't necessarily improve anymore even if you continue annotating.

If you suspect some kind of inconsistency in your later annotations, you could also use the review recipe to double-check them and enforce consistency.
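
For reference, that would look something like this; the dataset names here are just placeholders, and the exact arguments depend on your Prodigy version and annotation interface:

```bash
# Review the answers in the dataset you've been annotating and save the
# corrected annotations to a new set. "my_dataset" is the existing set,
# "my_dataset_reviewed" is a placeholder name for the reviewed output.
prodigy review my_dataset_reviewed my_dataset
```
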

But you could also just annotate another 50-100 examples first and see what happens, to determine whether the trend continues or whether the 1pp drop was just a "fluke".
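
Once you've added the extra annotations, you can re-run the curve and compare. Again, treat the dataset name as a placeholder, and note that the plotting flag may only be available in more recent Prodigy versions:

```bash
# Re-run the training curve on the updated dataset to see whether the
# last-sample dip persists or the score keeps improving.
# "--ner my_dataset" assumes an NER task; swap in your own component and dataset.
prodigy train-curve --ner my_dataset --show-plot
```
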

If you do notice that you're stuck around 30% on your task, it might be worth reconsidering the annotation guidelines for your approach. I don't know the details, though, so it's difficult to give more specific advice.

Thank you! I did a few more annotations as you suggested and it's working better now!

Happy to hear it!