Can't find recipe or command 'pyannote.scd.binary'.

Speaker segmentation and speaker change detection

I'm getting the following error when trying speaker segmentation and speaker change detection. The command I ran and the error I'm coming across are below.

Command

prodigy pyannote.scd.binary speaker_change ./faarecordings/ATC

Error

✘ Can't find recipe or command 'pyannote.scd.binary'.
Run prodigy --help to see available options. If you're using a custom recipe,
provide the path to the Python file using the -F argument.

Has anyone come across this issue? Any suggested resolution?
Please let me know as soon as possible.

Cheers!!

Hi @nmarker!

Thanks for your question. Did you install pyannote.audio?

The current version of pyannote.audio ships with the built-in Prodigy pyannote recipes, so if you have both packages installed, Prodigy will be able to detect the recipes automatically. Note that you have to install pyannote.audio from GitHub to be able to use the recipes.
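For example, installing directly from the GitHub repository would look something like this (a minimal sketch; check the pyannote.audio README for the branch or tag that actually ships the Prodigy recipes):

pip install git+https://github.com/pyannote/pyannote-audio.git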

Let me know if you tried this or continue to have issues. Thank you!

Yes, I have installed pyannote.audio:

pip3 install pyannote.audio==1.1.1

Still having the same issue.

(pyannote) nmarker@#### prodigy % pip3 install pyannote.audio==1.1.1
Requirement already satisfied: pyannote.audio==1.1.1 in ...

(pyannote) nmarker@#### prodigy % prodigy pyannote.scd.binary speaker_change ./faarecordings/ATC

✘ Can't find recipe or command 'pyannote.scd.binary'.
Run prodigy --help to see available options. If you're using a custom recipe,
provide the path to the Python file using the -F argument.

(pyannote) nmarker@#### prodigy %

Thanks.

Thanks for the quick response!

Just curious, can you run the following?

python -m prodigy pyannote.sad.manual --help

From what you've provided, I suspect it will work. This will at least confirm that your installation of Prodigy + pyannote.audio is working.

Looking more at the pyannote.audio repo, I can't find the pyannote.scd.binary recipe. This would explain the problem, but it's not clear what happened to that recipe. Unfortunately, I'll need some time to look into this and see if I can find out what happened to it.

I assume you found the pyannote.scd.binary recipe from our Prodigy documentation, is that correct?

Following is the output

(pyannote) nmarker@###### prodigy % python -m prodigy pyannote.sad.manual --help
usage: prodigy pyannote.sad.manual [-h] [-chunk 10.0] [-speed 1.0] dataset source

positional arguments:
  dataset      Dataset to save annotations to
  source       Directory containing audio files to annotate

optional arguments:
  -h, --help   show this help message and exit
  -chunk 10.0  split long audio files into shorter chunks of that many seconds each
  -speed 1.0   set the playback rate (0.5 means half the normal speed, 2 means double speed and so on)
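For reference, based on that help output a full invocation should look roughly like the line below (the dataset name and audio directory are just placeholders):

prodigy pyannote.sad.manual my_dataset ./audio_dir -chunk 10 -speed 1.5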

Yes, I found the pyannote.scd.binary reference in your documentation:

Speaker segmentation and speaker change detection

Thanks.

I reinstalled my environment and am now getting a different error.

prodigy % prodigy pyannote.scd.binary speaker_change ./faarecordings/ATC
Traceback (most recent call last):
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/prodigy/__main__.py", line 51, in <module>
    registry.recipes.get_entry_points()
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/catalogue/__init__.py", line 124, in get_entry_points
    result[entry_point.name] = entry_point.load()
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/importlib/metadata.py", line 77, in load
    module = import_module(match.group('module'))
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/pyannote/audio/__init__.py", line 29, in <module>
    from .core.inference import Inference
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/pyannote/audio/core/inference.py", line 31, in <module>
    from pytorch_lightning.utilities.memory import is_oom_error
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/pytorch_lightning/__init__.py", line 30, in <module>
    from pytorch_lightning.callbacks import Callback  # noqa: E402
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/pytorch_lightning/callbacks/__init__.py", line 26, in <module>
    from pytorch_lightning.callbacks.pruning import ModelPruning
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/pytorch_lightning/callbacks/pruning.py", line 31, in <module>
    from pytorch_lightning.core.lightning import LightningModule
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/pytorch_lightning/core/__init__.py", line 16, in <module>
    from pytorch_lightning.core.lightning import LightningModule
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 40, in <module>
    from pytorch_lightning.loggers import LightningLoggerBase, LoggerCollection
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/pytorch_lightning/loggers/__init__.py", line 18, in <module>
    from pytorch_lightning.loggers.tensorboard import TensorBoardLogger
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/pytorch_lightning/loggers/tensorboard.py", line 26, in <module>
    from torch.utils.tensorboard import SummaryWriter
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py", line 10, in <module>
    from .writer import FileWriter, SummaryWriter  # noqa: F401
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/torch/utils/tensorboard/writer.py", line 9, in <module>
    from tensorboard.compat.proto.event_pb2 import SessionLog
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/tensorboard/compat/proto/event_pb2.py", line 17, in <module>
    from tensorboard.compat.proto import summary_pb2 as tensorboard_dot_compat_dot_proto_dot_summary__pb2
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/tensorboard/compat/proto/summary_pb2.py", line 17, in <module>
    from tensorboard.compat.proto import tensor_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__pb2
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/tensorboard/compat/proto/tensor_pb2.py", line 16, in <module>
    from tensorboard.compat.proto import resource_handle_pb2 as tensorboard_dot_compat_dot_proto_dot_resource__handle__pb2
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/tensorboard/compat/proto/resource_handle_pb2.py", line 16, in <module>
    from tensorboard.compat.proto import tensor_shape_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__shape__pb2
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/tensorboard/compat/proto/tensor_shape_pb2.py", line 36, in <module>
    _descriptor.FieldDescriptor(
  File "/Users/nmarker/miniconda3/envs/pyannote_new/lib/python3.8/site-packages/google/protobuf/descriptor.py", line 560, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.

If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
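For what it's worth, the two workarounds suggested in that protobuf message translate to commands roughly like these, run in the same environment (the version pin is just an example):

pip install "protobuf<3.21"
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python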

After a deeper review, we couldn't locate the pyannote.scd.binary recipe in pyannote.audio. We have put in a pull request to remove this example from the Prodigy website. We apologize for the confusion and have made a note of this issue in case we're able to bring the recipe back in the future.