I’m intrigued by this tool; however, the models I typically develop are more fine-grained: I’m looking to predict an outcome on a Likert scale (1–5). Is there functionality for such a use case? I didn’t see anything especially relevant in the demo.
I’m also interested in whether there is an example workflow in which one individual’s labels could be compared to another individual’s labels, that is, any way to streamline estimates of inter-rater agreement/reliability (e.g., intraclass correlation coefficients such as ICC(1), or quadratically weighted kappa).
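For concreteness, here is a rough sketch (pure Python, no dependencies) of the kind of quadratically weighted kappa computation I’d like to streamline; the two raters and their labels below are made up for illustration:

```python
from collections import Counter

def quadratic_weighted_kappa(a, b, categories):
    """Cohen's kappa with quadratic disagreement weights (i - j)^2."""
    n = len(a)
    idx = {c: i for i, c in enumerate(categories)}
    # Observed mean squared disagreement between the two raters
    obs = sum((idx[x] - idx[y]) ** 2 for x, y in zip(a, b)) / n
    # Expected mean squared disagreement under independent marginals
    ca, cb = Counter(a), Counter(b)
    exp = sum(
        ca[x] * cb[y] * (idx[x] - idx[y]) ** 2
        for x in categories for y in categories
    ) / (n * n)
    return 1.0 - obs / exp

# Hypothetical Likert (1-5) labels from two raters on the same items
rater_a = [1, 2, 3, 4, 5, 3, 2, 4]
rater_b = [1, 2, 2, 4, 5, 3, 3, 5]
print(round(quadratic_weighted_kappa(rater_a, rater_b, [1, 2, 3, 4, 5]), 3))
# → 0.889
```

Something like this built in, alongside ICC variants, would cover most of my comparison needs.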
Nice-looking tool!