I’d like to say a word of appreciation for how nice the UI is across the board. Has @ines written anything anywhere about her front-end development workflow or technology stack? I’m curious if she uses frameworks, and if so which ones.
Thank you so much, that’s nice to hear! I haven’t really written about this in detail before, so I’ll just do it here:
The Prodigy app is a pretty straightforward single-page React app. It uses Redux for state management (although it likely would have been possible without it), and communicates with the Prodigy server over a REST API.
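If you're curious what that looks like in practice, here's a very rough sketch of the pattern – the state shape, action types and endpoint name are simplified and made up for illustration, not the actual source:

```ts
// Rough sketch of the architecture, not the actual Prodigy source code.
import { createStore } from "redux";

interface Task {
  text?: string;
  spans?: { start: number; end: number; label: string }[];
  answer?: "accept" | "reject" | "ignore";
}

interface State {
  queue: Task[];     // tasks waiting to be annotated
  answered: Task[];  // answered tasks, sent back to the server in batches
}

type Action =
  | { type: "QUEUE_TASKS"; tasks: Task[] }
  | { type: "ANSWER_TASK"; answer: Task["answer"] };

const initialState: State = { queue: [], answered: [] };

function reducer(state: State = initialState, action: Action): State {
  switch (action.type) {
    case "QUEUE_TASKS":
      return { ...state, queue: [...state.queue, ...action.tasks] };
    case "ANSWER_TASK": {
      const [current, ...rest] = state.queue;
      if (!current) return state;
      return {
        queue: rest,
        answered: [...state.answered, { ...current, answer: action.answer }],
      };
    }
    default:
      return state;
  }
}

export const store = createStore(reducer);

// Ask the server for the next batch of tasks over the REST API
export async function fetchTasks() {
  const res = await fetch("/get_questions"); // illustrative endpoint name
  const { tasks } = await res.json();
  store.dispatch({ type: "QUEUE_TASKS", tasks });
}
```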
Aside from basic React and Redux-related packages, the web app has barely any runtime dependencies. This lets us do something that’s probably considered an antipattern most of the time: draw everything into one bundle. The JavaScript bundle is still only around 600 KB, and is shipped with the Python package. The annotation interfaces like `ner`, `ner_manual`, `image` and so on are developed as React components, which all receive the same data as props. I implemented all annotation views from scratch, which meant that I was able to keep them lightweight, only focus on the features we need, and keep the styling and experience consistent.
Because it uses backwards-compatible CSS and core-js polyfills, the app should be very robust across browsers and platforms. So far, I’ve been able to test it in browsers dating back to 2014. While it sounds like a gimmick, it’s actually an important part of the philosophy: annotators should be able to access the Prodigy app, even if they’re on an old Android device or a Windows XP machine in their university library. This is still a problem, especially in non-western countries.
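If anyone wants to do something similar, the usual approach is to load the polyfills once at the app's entry point, roughly like this (just an example of the pattern, not necessarily our exact build setup):

```ts
// One common setup: import the core-js polyfills at the entry point so
// modern APIs like Promise or Object.assign exist even in older browsers.
import "core-js/stable";
import "regenerator-runtime/runtime"; // async/await support in old engines
```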
The UI itself went through many iterations. Some of my early drafts of binary, card-based interfaces date back to around 2016, but I really only started building them for full-featured annotation workflows much later. I managed to dig up some old screenshots that show the evolution:
Thank you for sharing!
I did notice your blog post "How Front-End dev can improve AI" from all the way back in 2016; it's clear you've been thinking this over for a while now. I totally agree with the core conclusions you've reached: that the bottleneck is the mode of supervision between humans and AI, and that tooling matters a lot there. Thanks for the look into the front-end process, and I especially like this:
> The annotation interfaces like `ner`, `ner_manual`, `image` and so on are developed as React components, which all receive the same data as props.
I think it shows a lot of care went into the data model when designing the front end. One of the nice things about Prodigy is how portable the annotations are.
Thanks a lot! And yes, I almost forgot about the blog post, haha. Here it is again, in case others come across this thread later: