import spacy
from prodigy.components.preprocess import add_tokens

nlp = spacy.load("en_core_web_md")
stream = add_tokens(nlp, stream)
for eg in stream:
    for token in eg["tokens"]:
        token["disabled"] = True
Error while validating stream: no first example
This likely means that your stream is empty. This can also mean all the examples
in your stream have been annotated in datasets included in your --exclude recipe.
Hi! I don't think this would be related to the "disabled" setting on the token – maybe your stream is empty because all the examples are already annotated in the dataset you're using?
Oh, maybe I forgot to call set_hashes?
Or maybe you just want to use a different dataset? Otherwise you'd end up with duplicates of the same example with different disabled tokens, which might not be what you want?
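For what it's worth, a stream modifier like the loop above is usually written as a generator that yields each example back out – iterating the stream in place exhausts it, so nothing is left for the server to send. A minimal self-contained sketch (plain dicts stand in for the tokenized examples that add_tokens would produce; the set_hashes call is left as a comment since it needs Prodigy installed):

```python
def disable_all_tokens(stream):
    """Mark every token as disabled, yielding each example back lazily."""
    for eg in stream:
        for token in eg["tokens"]:
            token["disabled"] = True
        # In a real recipe you could re-hash the modified example here:
        # eg = set_hashes(eg)  # from prodigy import set_hashes
        yield eg

# Stand-in for a tokenized stream, i.e. what add_tokens would yield.
stream = [
    {
        "text": "Hello world",
        "tokens": [{"text": "Hello", "id": 0}, {"text": "world", "id": 1}],
    }
]
examples = list(disable_all_tokens(stream))
```

Because the function yields instead of consuming the stream, the examples still reach the web app, just with all tokens flagged as disabled.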