Hi! To train a model, you'll need at least some examples to evaluate on; otherwise there's no way to show you results. The eval_split setting exists for quick experiments when you don't have a dedicated evaluation set and just want to hold back some data. Once you get serious about training and evaluating, though, you'll probably want a separate dataset containing the annotations to evaluate on, and pass that in as the --eval-id instead of holding back a random portion of the examples.
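To sketch the difference, something like this (the dataset names and base model here are placeholders, and the exact recipe signature may differ between Prodigy versions):

```shell
# Quick experiment: hold back 20% of the annotated examples for evaluation
prodigy train ner my_dataset en_core_web_sm --eval-split 0.2

# Dedicated evaluation: evaluate on a separate annotated dataset instead
prodigy train ner my_dataset en_core_web_sm --eval-id my_eval_dataset
```

With --eval-id, the evaluation set stays fixed across runs, so your accuracy numbers are actually comparable from one experiment to the next.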