Leveraging negative examples collected through Prodigy with a transformer model (textcat)

Hi,

I have been able to train a highly accurate text classifier thanks to Prodigy and spacy-transformers, so thank you for building all the tooling that makes this possible.

However, I still have not been able to leverage negative examples during training ("reject" annotations collected via Prodigy). I saw the script @Ines shared as a Gist earlier this year (in "Using transformer models inside prodigy and finetuning") for training transformer models from Prodigy annotations, and it looks like "reject" examples are not used during training there either: https://gist.github.com/ines/dd618b5bdc544b4ff49b363e98c6368a#file-prodigy_textcat-py-L296-L311
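For reference, the annotations I'm working with look roughly like this when exported (simplified, with Prodigy's metadata fields such as `_input_hash` and `_task_hash` left out); it's the `"answer": "reject"` rows I'd like to put to use:

```python
# Roughly the shape of a single binary textcat annotation exported
# with `prodigy db-out` (metadata fields omitted for brevity):
{"text": "my great text", "label": "label1", "answer": "reject"}
```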
Is there something inherent to training transformer models that prevents this? Is there a reason something like the following can't be passed along?
{"text": "my great text", "cats: {"label1": 0.0, "label2": None}}

Thank you in advance for your help!