whl file is not supported

(prodigy) C:\Users\RadhaSowjanyaGanta\Documents>conda install prodigy-1.10.7-cp36.cp37.cp38-cp36m.cp37m.cp38-win_amd64.whl
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.

PackagesNotFoundError: The following packages are not available from current channels:
  - prodigy-1.10.7-cp36.cp37.cp38-cp36m.cp37m.cp38-win_amd64.whl

Current channels:

To search for alternate channels that may provide the conda package you're looking for,
navigate to https://anaconda.org and use the search bar at the top of the page.

pip version - 21.2.1
conda version - 4.10.1
Windows 10

Hi! conda install doesn't support installing wheel files directly, which is why you're seeing this error. The easiest way is to just use pip install instead.

Also see the conda docs for background: "Using wheel files with conda" in the conda-build documentation.

(prodigy) C:\Users\RadhaSowjanyaGanta>pip install prodigy-1.10.7-cp36.cp37.cp38-cp36m.cp37m.cp38-win_amd64.whl
ERROR: prodigy-1.10.7-cp36.cp37.cp38-cp36m.cp37m.cp38-win_amd64.whl is not a supported wheel on this platform.

Did you check the installation docs here and see whether the file name is correct for your platform? https://prodi.gy/docs/install#install-supported-wheel Also make sure you're running Python 3.6, 3.7 or 3.8. We currently don't provide wheels for 3.9, but this is coming with the next release.
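For example, a quick way to confirm which interpreter the active environment is actually using (the cp36/cp37/cp38 tags in the wheel name have to match the Python major.minor version):

import sys
print(sys.version_info[:2])   # must be (3, 6), (3, 7) or (3, 8) for this wheel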

Thank you, Ines. Python 3.9 had been installed initially. The issue is resolved!


After my data is trained, I am facing the below error. Could you please look into this?

Traceback (most recent call last):
  File "C:\Users\RadhaSowjanyaGanta\anaconda3\envs\py36\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\RadhaSowjanyaGanta\anaconda3\envs\py36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\RadhaSowjanyaGanta\anaconda3\envs\py36\lib\site-packages\prodigy\__main__.py", line 53, in <module>
    controller = recipe(*args, use_plac=True)
  File "cython_src\prodigy\core.pyx", line 321, in prodigy.core.recipe.recipe_decorator.recipe_proxy
  File "C:\Users\RadhaSowjanyaGanta\anaconda3\envs\py36\lib\site-packages\plac_core.py", line 367, in call
    cmd, result = parser.consume(arglist)
  File "C:\Users\RadhaSowjanyaGanta\anaconda3\envs\py36\lib\site-packages\plac_core.py", line 232, in consume
    return cmd, self.func(*(args + varargs + extraopts), **kwargs)
  File "C:\Users\RadhaSowjanyaGanta\anaconda3\envs\py36\lib\site-packages\prodigy\recipes\train.py", line 196, in train
    nlp.to_disk(output_path)
  File "C:\Users\RadhaSowjanyaGanta\anaconda3\envs\py36\lib\site-packages\spacy\language.py", line 927, in to_disk
    util.to_disk(path, serializers, exclude)
  File "C:\Users\RadhaSowjanyaGanta\anaconda3\envs\py36\lib\site-packages\spacy\util.py", line 681, in to_disk
    writer(path / key)
  File "C:\Users\RadhaSowjanyaGanta\anaconda3\envs\py36\lib\site-packages\spacy\language.py", line 914, in <lambda>
    p, exclude=["vocab"]
  File "tokenizer.pyx", line 542, in spacy.tokenizer.Tokenizer.to_disk
  File "C:\Users\RadhaSowjanyaGanta\anaconda3\envs\py36\lib\pathlib.py", line 1183, in open
    opener=self._opener)
  File "C:\Users\RadhaSowjanyaGanta\anaconda3\envs\py36\lib\pathlib.py", line 1037, in _opener
    return self._accessor.open(self, flags, mode)
  File "C:\Users\RadhaSowjanyaGanta\anaconda3\envs\py36\lib\pathlib.py", line 387, in wrapped
    return strfunc(str(pathobj), *args)
FileNotFoundError: [Errno 2] No such file or directory: 'work_exp_train.jsonl\tokenizer'

If you look at the error message, it shows that it's trying to load a .jsonl file instead of a spaCy model. So double-check that you've provided the correct arguments to the recipe you're using. It looks like you're loading in the source file as the argument that expects the model.
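For example, the rough pattern for the v1.10 train recipe is something like the following (dataset name, base model and output path are just placeholders, so double-check prodigy train --help for the exact arguments in your version):

prodigy train ner your_dataset blank:en --output .\your_output_dir

The raw .jsonl file itself isn't passed to train at all; the annotations come from the dataset you saved them to in Prodigy.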

Hey! How can I load the model and test it on a raw JSONL / JSON file without annotations?

If you've provided an output directory to prodigy train, you can load in that trained model with spaCy and apply it to any text you like:

import spacy

nlp = spacy.load(your_output_dir)   # the output directory from prodigy train
doc = nlp(any_text)
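If your test data is a raw JSONL file with a "text" field per line (the same format Prodigy reads as input), a minimal sketch for running the model over it could look like this (the file name and field name are assumptions about your data):

import json
import spacy

nlp = spacy.load("./your_output_dir")                      # directory you passed to prodigy train
with open("work_exp_test.jsonl", encoding="utf8") as f:    # hypothetical raw test file
    for line in f:
        doc = nlp(json.loads(line)["text"])                # assumes each line is {"text": "..."}
        print([(ent.text, ent.label_) for ent in doc.ents])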

Hey, I have annotated two different data files with the same labels, but used different casing in one of them, e.g. designation in one file and Designation in the other.

Now, when I am trying to merge these two and train, it's showing this:
Label          Precision   Recall   F-Score
------------   ---------   ------   -------
designation       77.882   78.125    78.003
company_name      66.607   59.872    63.060
end_date          90.377   87.214    88.767
from_date         85.447   83.333    84.377
Company_Name      65.625   45.652    53.846
Designation       63.366   33.684    43.986

How can I resolve this issue?

One option would be to export your dataset using db-out, which gives you a JSONL file you can edit. You could then do a quick search and replace in your editor to make sure all labels are the same. Once you're done, you can use db-in to import the edited data into a new dataset.
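If you'd rather script the clean-up than do it by hand, a small sketch along these lines would lowercase every span label in the exported file (dataset and file names are placeholders, and it assumes NER-style annotations with "spans" and "label" keys):

# prodigy db-out your_dataset > annotations.jsonl
import json

with open("annotations.jsonl", encoding="utf8") as f_in, \
     open("annotations_fixed.jsonl", "w", encoding="utf8") as f_out:
    for line in f_in:
        eg = json.loads(line)
        for span in eg.get("spans", []):
            span["label"] = span["label"].lower()   # e.g. "Designation" -> "designation"
        f_out.write(json.dumps(eg) + "\n")
# prodigy db-in your_new_dataset annotations_fixed.jsonl

That way designation / Designation and company_name / Company_Name all end up as one label before you re-import and retrain.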