How do I use Prodigy with the custom HTML interface (https://prodi.gy/docs/custom-interfaces#html)? If I specify the JSON task format in a file, how do I call it for my annotation task, e.g. with `python -m prodigy audio.manual` etc.?
Hi! If you want to use a fully custom interface, you typically also want a custom recipe that lets you orchestrate how the data is created, how it's rendered and how the settings are configured. See the custom recipes docs for examples: https://prodi.gy/docs/custom-recipes The main thing that's important is that the JSON tasks you create in your recipe match the format expected by the interface you use.
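To make that concrete, here's a minimal sketch of creating tasks for the `html` interface. The file name `tasks.jsonl` and the example texts are hypothetical; the point is just that each task carries the key the chosen interface expects (an `"html"` key for the html view).

```python
import json

# Build tasks in the format expected by the "html" interface: each task
# provides an "html" key with the markup to render. The extra "text" key
# is optional metadata that gets stored with the annotation.
def make_tasks(texts):
    for text in texts:
        yield {
            "text": text,
            "html": f"<strong>{text}</strong>",  # markup rendered by the html view
        }

tasks = list(make_tasks(["first example", "second example"]))

# Write the tasks to a JSONL file that a loader or custom recipe can read
with open("tasks.jsonl", "w", encoding="utf8") as f:
    for task in tasks:
        f.write(json.dumps(task) + "\n")
```

A custom recipe would then load this file as its stream and return `"view_id": "html"`.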
Using the recipe name and optional arguments, you can then call the recipe on the command line and add the `-F` flag to point Prodigy to the Python file containing your code.
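As a command-line fragment, assuming a hypothetical custom recipe named `yawn-video` defined in a file `recipe.py` (the dataset and source names are placeholders too), the call might look like this:

```shell
# "yawn-video", "my_dataset" and "recipe.py" are hypothetical names;
# -F points Prodigy at the file containing the @prodigy.recipe function
python -m prodigy yawn-video my_dataset ./videos.jsonl -F recipe.py
```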
Thank you very much. The reason I'm looking for a custom interface is that I want to annotate a video by clicking a button when an event occurs and recording that time. For example, if someone yawns, I want to click a button at the start of the yawn. Is there a way to build this functionality using audio.manual?
If you load in video files with `--loader video`, you could also annotate this as a label `YAWN` in the waveform. Then you get the timestamps out of the box. If you only care about the start of the yawn, you can just ignore the end of the segment.
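For reference, annotations created with `audio.manual` come out with an `"audio_spans"` list holding the start/end timestamps (in seconds) for each labelled region, so the onsets are easy to pull out afterwards. A small sketch with made-up timestamps:

```python
# Sketch of post-processing an example annotated with audio.manual and
# --loader video: the timestamps below are invented for illustration.
example = {
    "video": "clip.mp4",  # hypothetical file name
    "audio_spans": [
        {"start": 12.4, "end": 14.1, "label": "YAWN"},
        {"start": 40.0, "end": 41.8, "label": "YAWN"},
    ],
}

# If only the onset matters, keep just the start of each YAWN span
yawn_starts = [
    span["start"]
    for span in example["audio_spans"]
    if span["label"] == "YAWN"
]
print(yawn_starts)  # [12.4, 40.0]
```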
Alternatively, you could probably also implement that in a custom recipe with just the `video` interface and some custom JavaScript:

- Select the `<video>` element and get the current timestamp (should be pretty straightforward using the element's `currentTime` property).
- Call `window.prodigy.update` and add that timestamp to the current JSON task. For example, you could have a list `"yawns"` and add an entry to it every time you click the button for a yawn.
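A sketch of what that custom JavaScript could look like, written here as the string you'd pass to a custom recipe via its `"javascript"` config setting. The `#yawn-button` id and the `"yawns"` list are hypothetical names; `window.prodigy.update` (which merges the given fields into the current task) and the `prodigymount` event are part of Prodigy's documented front-end API:

```python
# JavaScript passed to Prodigy via the recipe config. The button id and
# the "yawns" key are illustrative, not a fixed Prodigy convention.
JAVASCRIPT = """
document.addEventListener('prodigymount', () => {
    document.querySelector('#yawn-button').addEventListener('click', () => {
        const video = document.querySelector('video')
        const yawns = window.prodigy.content.yawns || []
        yawns.push(video.currentTime)
        window.prodigy.update({ yawns })
    })
})
"""

# In the dict returned by a custom recipe, this string would go into the
# config, e.g.:
# return {"dataset": dataset, "stream": stream, "view_id": "blocks",
#         "config": {"javascript": JAVASCRIPT}}
```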