Using transformers_tokenizers.py

Hello!
I wanted to try using the tokenizer from a BERT model with:

prodigy bert.ner.manual ner_reddit ./reddit_comments.jsonl --label PERSON,ORG --tokenizer-vocab ./bert-base-uncased-vocab.txt --lowercase --hide-wp-prefix -F transformers_tokenizers.py

But it always gives the error:

Invalid recipe file path
transformers_tokenizers.py

I added transformers_tokenizers.py to my folder, but I still get the same error.
Please tell me what I am doing wrong. Thank you!

That's strange. If you execute prodigy ... from the same directory where the transformers_tokenizers.py file is stored, the -F transformers_tokenizers.py part should work.

Can you execute this command in the same directory and paste the output?

ls transformers_*

And can you also execute your original command from that directory and paste the full stack trace?
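As a general illustration (this is standard path behavior, not Prodigy internals): a relative path like the one passed to -F is resolved against the directory you run the command from, not against the location of any other file. A minimal Python sketch:

```python
from pathlib import Path

# A relative path resolves against the current working directory,
# so "transformers_tokenizers.py" only exists if that file sits in
# the directory you launched the command from.
recipe = Path("transformers_tokenizers.py")
print(recipe.resolve())  # cwd + the relative path
print(recipe.exists())   # False unless the file is in the cwd
```

This is why running the command from a different directory can fail even though the file exists somewhere on disk.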

Hi! I was running

python -m prodigy bert.ner.manual test ./TEST/berlin.txt --label PER,ORG,GPE --tokenizer-vocab ./TEST/vocab.txt --hide-wp-prefix -F ./TEST/transformers_tokenizers.py

but when I ran it from the TEST folder itself, it worked:

python -m prodigy bert.ner.manual test berlin.txt --label PER,ORG,GPE --tokenizer-vocab vocab.txt --hide-wp-prefix -F transformers_tokenizers.py

It works! Thank you :)