Stream Reset error handling

Hi!

I'm trying to implement stream reset functionality similar to stream-reset using prodigy-lunr.

I'm encountering issues when handling errors during the reset process.

Failures typically occur due to a ParsingException (invalid request) or EmptyStream (no results).
The current implementation returns an unhandled exception that results in a 500 Internal Server Error. Additionally, the response body of this 500 is not JSON-parseable, since the body is just the plain text "Internal Server Error". This leads to further errors in the event handler, which I cannot catch or manage.

The default plugin behavior retains the loading icon, causing confusion.
How can I improve error handling to return meaningful feedback without triggering these issues?

Hi @Arnault,

Apologies for the delayed response!
The main problem here, in my opinion, is that the event handler is not doing any checks on the reset stream.
The two main problems that can occur are that the new stream is empty (because the query did not return any results) or that the task structure in the new stream is invalid.
The `reset_stream` core function only overwrites the `stream` attribute on the `Controller` object, and `Controller`/`Stream` don't know about the required schema for the task. This will change in Prodigy v2, where the Stream will become structured. Right now it is assumed that the structure of the task is in line with what the front-end expects; otherwise, the front-end will fail with a parsing error.
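To make that concrete, here's a minimal sketch of what such a reset boils down to. All names here (`reset_stream`, `search_examples`, `ctrl`) are illustrative assumptions, not the plugin's actual API, and the stream is represented as a plain iterator for simplicity:

```python
from typing import Dict, List


def search_examples(query: str) -> List[Dict]:
    """Stand-in for a lunr-style search over the source data (hypothetical)."""
    data = [{"text": "first example"}, {"text": "second example"}]
    return [ex for ex in data if query.lower() in ex["text"].lower()]


def reset_stream(ctrl, *, query: str) -> Dict:
    """Hypothetical event handler that resets the stream.

    This mirrors what the core reset does: it only swaps the generator
    behind the controller's `stream` attribute. Nothing here checks that
    the result is non-empty or that each task has the keys the current
    view_id expects.
    """
    examples = search_examples(query)
    ctrl.stream = iter(examples)  # no validation happens at this point
    return {"status": "reset", "n_examples": len(examples)}
```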

For the first case, i.e. the empty stream, I think it should be the job of the event handler to validate the output (an empty stream is a valid data structure per se, and it's up to the consumer to decide whether it's an exception).
I've added a check against the empty stream to the event handler here. You could consider adding something similar to your custom version.
It now raises an HTTPException that is captured by the plugin's JS, which triggers an alert pop-up to prompt the user to try a different query.
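For reference, here is a hedged sketch of such a check (reusing the hypothetical `search_examples` helper from the sketch above; the status code and message are just examples). Prodigy's REST API is served with FastAPI, so raising `HTTPException` produces a JSON error body that the plugin's JS can actually parse:

```python
from fastapi import HTTPException


def reset_stream_checked(ctrl, *, query: str) -> dict:
    """Hypothetical handler that validates the new stream before using it."""
    examples = list(search_examples(query))  # hypothetical search helper from above
    if not examples:
        # A 4xx with a JSON `detail` instead of an unhandled exception,
        # so the front-end gets a parseable error rather than a plain-text 500.
        raise HTTPException(
            status_code=400,
            detail=f"No results found for query: {query!r}. Try a different query.",
        )
    ctrl.stream = iter(examples)
    return {"status": "reset", "n_examples": len(examples)}
```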
The second possibility, i.e. a malformed task data structure, is a bit harder to catch, as it is not validated until it reaches the front-end. So right now it fails with a console error and a parsing error in the UI (it doesn't get into an infinite loop, but you should consult the console to see the details of the error).
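Until then, one workaround is to validate the task shape in the event handler itself, before overwriting the stream, so the failure surfaces as a clean HTTP error instead of a front-end parsing error. A minimal sketch, assuming a flat set of required keys (the `REQUIRED_KEYS` set and helper name are assumptions; adjust them to whatever your view_id actually needs):

```python
from typing import Dict, Iterable, List

from fastapi import HTTPException

# Assumption: the keys your chosen view_id requires, e.g. "text" for text-based views.
REQUIRED_KEYS = {"text"}


def validate_tasks(examples: Iterable[Dict]) -> List[Dict]:
    """Hypothetical pre-flight check run before resetting the stream."""
    validated = []
    for i, task in enumerate(examples):
        missing = REQUIRED_KEYS - task.keys()
        if missing:
            raise HTTPException(
                status_code=400,
                detail=f"Task {i} is missing required keys: {sorted(missing)}",
            )
        validated.append(task)
    return validated
```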
As I mentioned, we are planning on improving data validation in the backend via the "structured stream", where tasks will be structured data classes rather than dictionaries, including the possibility of defining custom task types, of course.