✨ Typos and inconsistencies master thread

docs
to-be-released

(Ines Montani) #1

We’re trying our best to keep Prodigy typo-free, but they still creep in sometimes. So I thought I’d create a master thread to make it easier to report small typos and inconsistencies in the documentation, website, commands, logging etc.

Thanks to everyone who’s been taking the time to report minor issues like this so far. Don’t ever feel bad or pedantic about reporting small typos btw – we love attention to detail! :pray:


(W.P. McNeill) #2

There is a typo in the web documentation for ner.eval. You write “Evaluate a n NER model”. There is a space between the “a” and “n” in “an”.


(W.P. McNeill) #3

ner.py:line 146

log("RECIPE: Moaded model {}".format(spacy_model))

Should be “Loaded”.


(W.P. McNeill) #4

ner.py line 106 reads

log("RECIPE: Created PatternMatcher and loded in patterns", patterns)

“loded” should be “loaded”.


(Kevin Lyons) #5

When I try to include a TextClassifier in a custom recipe my interpreter barfs:

import spacy
from prodigy.models.ner import EntityRecognizer
from prodigy.models.textcat import TextClassifier

nlp = spacy.load('en_core_web_lg')
ner_model = EntityRecognizer(nlp, label=['TACO', 'BURRITO'])  # works
cat_model = TextClassifier(nlp, label=['TACO'])  # fails 

The error message is:

cython_src/prodigy/models/textcat.pyx in prodigy.models.textcat.TextClassifier.__init__()
TypeError: __init__() takes at least 3 positional arguments (2 given)

My environment has prodigy==1.1.0 and spacy==2.0.5.

Mostly unrelated (and trivial) - I think in PRODIGY_README.html#models there is a typo in the EntityRecognizer section where prodigy.components.ner.EntityRecognizer should be prodigy.models.ner.EntityRecognizer.


(Ines Montani) #6

Thanks – I think this might be a typo in the docs examples as well. Try setting the labels as the second positional argument, e.g.:

cat_model = TextClassifier(nlp, ['TACO'])

Will adjust this in the docs and also fix the other typo you mentioned, thanks! :+1:
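(For anyone curious why the keyword form fails: this is consistent with the argument being positional-only in the compiled signature. A minimal pure-Python sketch of that behaviour, where `FakeTextClassifier` is a made-up stand-in and not Prodigy's actual class:)

```python
class FakeTextClassifier:
    # The `/` marks nlp and label as positional-only (Python 3.8+),
    # mimicking a compiled signature that rejects `label=` as a keyword.
    def __init__(self, nlp, label, /):
        self.nlp = nlp
        self.label = label

try:
    FakeTextClassifier("nlp", label=["TACO"])  # keyword form fails
except TypeError as e:
    print("TypeError:", e)

model = FakeTextClassifier("nlp", ["TACO"])  # positional form works
print(model.label)
```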


(Kevin Lyons) #7

That worked, thanks!


(Chandru) #10

Hi,

There is a typo in the example on https://prodi.gy/docs/workflow-named-entity-recognition

spacy.dispacy.serve(doc, style='ent')

should be

spacy.displacy.serve(doc, style='ent')


(Ines Montani) #11

@cmtru Thanks a lot! Will fix this and move this over to the typos thread :+1:


(Hugo Duncan) #12

In the Postgres section of the doc, it uses MySQL:

The settings for a MySQL database can take any psycopg2 connection parameters.


(Ines Montani) #13

@hugoduncan Thanks a lot – I guess that was a copy-paste mistake. Fixing! :+1:


(W.P. McNeill) #14

Prodigy 1.4.0. In the recipe_args dictionary in core.py at line 594 there is the following definition:

'whole_text': (
    'Make accept/reject refer to whole text whole text (not single span)',
    'flag',
    'W',
    '<value is a self-reference, replaced by this string>',
),

“whole text” is repeated.


(W.P. McNeill) #15

The “a n” for “an” typo for ner.eval is in the help documentation as well as the web documentation.

$ pgy ner.eval --help
usage: prodigy ner.eval [-h] [-a None] [-lo None] [-l None] [-e None] [-W]
                        [-U]
                        dataset model [source]

    Evaluate a n NER model and build an evaluation set from a stream.

Prodigy 1.4.0.


(Nathaniel Lane) #16

Insanely small. Some of the documentation is missing arguments. For example, the section in the documentation for db-out (PRODIGY_README.html#manage-db-out) is missing the --answer/-a description.

I realized it accepted this argument when I looked at the command line help section:

➜ prodigy db-out -h
usage: prodigy db-out [-h] [-a None] [-F] [-D] set_id [out_dir]

    Export annotations from the database. Files will be exported in
    Prodigy's JSONL format.


positional arguments:
  set_id                Dataset ID
  out_dir               Path to output directory

optional arguments:
  -h, --help            show this help message and exit
  -a None, --answer None
                        Only export annotations with this answer
  -F, --flagged-only    Only export flagged annotations
  -D, --dry             Perform a dry run