Training an annotated dataset with ner.make-gold

Rejected examples make a difference for the model if you're updating with incomplete annotations, e.g. binary yes/no decisions. They help the model "narrow in" on the correct analysis, even if some values are missing. I've shared an illustration of this in my comment here:
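To make this concrete, here's a toy sketch (not the actual Prodigy internals) of why a rejected binary decision still carries signal: it rules out one candidate analysis, even though it never says what the correct one is. The candidate spans and labels below are made up for illustration.

```python
# Candidate analyses for "I like Berlin": is "Berlin" a GPE or an ORG?
candidates = [
    {"start": 7, "end": 13, "label": "GPE"},
    {"start": 7, "end": 13, "label": "ORG"},
]

# A rejected binary decision eliminates an analysis without saying
# what the correct label actually is.
decisions = [
    {"span": {"start": 7, "end": 13, "label": "ORG"}, "answer": "reject"},
]

def is_consistent(candidate, decisions):
    """Return False if the candidate matches a rejected decision."""
    for d in decisions:
        span = d["span"]
        same = (candidate["start"], candidate["end"], candidate["label"]) == (
            span["start"], span["end"], span["label"]
        )
        if d["answer"] == "reject" and same:
            return False
    return True

remaining = [c for c in candidates if is_consistent(c, decisions)]
print(remaining)  # only the GPE analysis survives
```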

If you're using a workflow like ner.make-gold and/or exporting the examples to train spaCy later on, the general assumption is that your data is gold standard: the annotated entities are all the entities there are, and any token that isn't labelled is not part of an entity. That's why the recipe only uses the accepted answers: since it assumes the accepted annotations are complete, there's nothing to gain from the rejects.
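If you want to replicate that filtering yourself, here's a minimal sketch, assuming a Prodigy-style JSONL export (records with "text", "spans" and "answer" keys) and a hypothetical file name annotations.jsonl. It keeps only the accepted answers and converts them into spaCy's (text, {"entities": ...}) training format:

```python
import json

def load_gold_examples(path):
    """Keep accepted answers only and convert them to spaCy's
    (text, {"entities": [(start, end, label), ...]}) format."""
    examples = []
    with open(path, encoding="utf8") as f:
        for line in f:
            record = json.loads(line)
            # Skip rejects and ignores: under the gold-standard
            # assumption, only accepted examples are complete.
            if record.get("answer") != "accept":
                continue
            entities = [
                (span["start"], span["end"], span["label"])
                for span in record.get("spans", [])
            ]
            examples.append((record["text"], {"entities": entities}))
    return examples

train_data = load_gold_examples("annotations.jsonl")
```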