"No tasks available" with custom recipe for text classification

I am using a custom recipe for multi-label text classification, but I'm getting a "No tasks available" message after a few annotations. It only works again if I restart the server.
Prodigy version: 1.10.8

I'm fetching data from the input JSONL file:

PRODIGY_LOGGING=basic prodigy article_cat articles5 articles_part_5.jsonl -F


Could you please check my recipe?

```python
import prodigy
from prodigy.components.loaders import JSONL


@prodigy.recipe(
    "article_cat",
    dataset=("The dataset to save to", "positional", None, str),
    file_path=("Path to texts", "positional", None, str),
)
def article_cat(dataset, file_path):
    """Annotate the sentiment of texts using different mood options."""
    stream = JSONL(file_path)      # load in the JSONL file
    stream = add_options(stream)   # add options to each task
    blocks = [
        {"view_id": "html"},
        {"view_id": "text"},
        {"view_id": "choice", "text": None, "html": None},
    ]
    return {
        "dataset": dataset,        # save annotations in this dataset
        "view_id": "blocks",       # set the view_id to "blocks"
        "stream": list(stream),
        "config": {
            "blocks": blocks,      # add the blocks to the config
        },
    }


def add_options(stream):
    """Helper function to add options to every task in a stream."""
    options = [
        {"id": "1", "text": "A"},
        {"id": "2", "text": "B"},
        {"id": "3", "text": "C"},
        {"id": "4", "text": "D"},
        # ... a few more labels
    ]
    for task in stream:
        task["options"] = options
        yield task
```

Hi! Is there anything in your prodigy.json and/or are you using multi-user sessions?

I am not using multi-user sessions.
However, I am running 5 different Prodigy instances with the same script but different input data files (and different output datasets) on the same machine.
Here is my prodigy.json:

"theme": "basic",
"buttons": ["accept","undo"],
"custom_theme": {"largeText":18,"mediumText":16,"smallText":14,"cardMinWidth":300,"cardMaxWidth":1400,"cardMinHeight":200,"buttonSize":50,"relationHeight":130,"relationHeightWrap":40},
"batch_size": 8,
"history_size": 8,
"host": "*",
"cors": true,
"db": "sqlite",
"db_settings": {},
"api_keys": {},
"validate": true,
"auto_exclude_current": true,
"instant_submit": false,
"feed_overlap": false,
"ui_lang": "en",
"project_info": ["dataset", "session", "lang", "recipe_name", "view_id", "label"],
"show_stats": true,
"hide_meta": false,
"instructions": "instructions.html",
"swipe": false,
"swipe_gestures": { "left": "accept", "right": "reject" },
"split_sents_threshold": false,
"global_css": ".prodigy-content { text-align: left }",
"javascript": null,
"writing_dir": "ltr",
"show_whitespace": false,
"choice_style": "multiple",
"auto_count_stream": true,
"total_examples_target": 2233,
"show_flag": true

This all looks reasonable! Just one quick comment: the auto_count_stream and total_examples_target settings were both only introduced in v1.11, so they won't have any effect in v1.10. So if you want to use them, you should upgrade to v1.11 – if you can, this would be interesting to try in a separate environment to see if it solves the problem you're seeing.

I've tried out your recipe with the same settings and some random data file and I can't seem to reproduce the problem :thinking: Some things to check on your end:

  • What's in the input JSONL files? Do they contain duplicates? How many examples are in them? Do you see "No tasks available" at the beginning of the file or do you actually hit the end? (Maybe you want to set "force_stream_order": true so that refreshing the browser doesn't request the next batch? This only makes sense if you only have one user per instance, though.)
  • Since you're running multiple instances, do you have enough memory?
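To answer the duplicates question without guessing, the input file can be checked with a short stdlib-only sketch. This assumes the deduplication key is the `"text"` field of each task; adjust it to whatever field your tasks actually use. The `demo.jsonl` file here is just a stand-in for one of your real input files like `articles_part_5.jsonl`.

```python
import json
from collections import Counter


def count_duplicates(path, key="text"):
    """Return (unique_count, {value: n}) for values of `key` seen more than once."""
    with open(path, encoding="utf-8") as f:
        counts = Counter(json.loads(line)[key] for line in f if line.strip())
    dupes = {value: n for value, n in counts.items() if n > 1}
    return len(counts), dupes


# Small demo file standing in for a real input file
with open("demo.jsonl", "w", encoding="utf-8") as f:
    for text in ["first article", "second article", "first article"]:
        f.write(json.dumps({"text": text}) + "\n")

total, dupes = count_duplicates("demo.jsonl")
print(total, "unique texts;", len(dupes), "duplicated")  # 2 unique texts; 1 duplicated
```

If the duplicate count is high, the stream can run dry much earlier than the raw line count suggests, since Prodigy excludes examples it has already seen.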

No, this is the first time I've posted here.
Many thanks! "force_stream_order": true solved the problem. Could you please add this to the documentation? I noticed it's not mentioned there.

Hi Ines,

I tried adding "force_stream_order": true to my prodigy.json, but the same "No tasks available" error still appears after a few annotations.

This is now the 4th time I've had to restart the instance after a complaint from clients.
Could you please help with this?


Even after restarting the instance, I'm now getting the same issue.

Can you share some more details about the data you're using? How large is your data file, and did you confirm that it still includes examples that are not yet present in your dataset? Does your data contain duplicates that could be excluded?
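One way to confirm that unseen examples remain is to export the dataset with `prodigy db-out articles5 > annotated.jsonl` and compare the `"text"` fields against the input file. The sketch below is stdlib-only; the file names and the `"text"` key are assumptions, and the two files written here merely stand in for the real input file and the db-out export.

```python
import json


def load_texts(path):
    """Collect the set of "text" values from a JSONL file."""
    with open(path, encoding="utf-8") as f:
        return {json.loads(line)["text"] for line in f if line.strip()}


# Stand-ins for the real input file and the `prodigy db-out` export
with open("input.jsonl", "w", encoding="utf-8") as f:
    for text in ["a", "b", "c"]:
        f.write(json.dumps({"text": text}) + "\n")
with open("annotated.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps({"text": "a", "answer": "accept"}) + "\n")

remaining = load_texts("input.jsonl") - load_texts("annotated.jsonl")
print(len(remaining), "examples not yet annotated")  # 2 examples not yet annotated
```

If this reports zero remaining examples, "No tasks available" is the expected behavior rather than a bug.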