I'm trying to replicate a multi-view annotation task from a paper, which requires the annotator to correctly identify the same tree on four different street view panoramas.
I can't figure out how to display multiple images with the blocks UI as described in the documentation. I'm wondering whether the tools / JavaScript needed to build a custom image.manual-style interface with a full HTML UI already exist within Prodigy.
Here is a sample of that interface from the paper. The goal is to identify the same tree (purple box) in each of the four successive panoramas, or to mark it as not present (the red circle represents the lat/lon of a known tree projected onto the panorama).
Hi! This is a cool idea! It might be a bit tricky to implement via the blocks UI at the moment, though, because the image_manual UI expects the image to be available under the key "image". We should introduce an option to change this, just like you can already customize the "spans" key that the annotations are stored under. You could then override these config settings for each block, e.g. "config": {"image_manual_spans_key": "spans1", "image_manual_image_key": "image1"}. So block 1 could read from "image1" and write to "spans1", and so on. This should be straightforward to add, but it's not implemented yet.
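Just to illustrate the idea, here's a rough sketch of what a custom recipe could look like once such per-block options exist. To be clear, the "image_manual_image_key" / "image_manual_spans_key" settings are hypothetical and not currently supported, and the image URLs and label are made up for the example:

```python
# Hypothetical sketch only: the "image_manual_image_key" and
# "image_manual_spans_key" settings do not exist in Prodigy yet.
import prodigy

@prodigy.recipe("multi-image.manual")
def multi_image_manual(dataset):
    def get_stream():
        # Each task carries four panoramas under separate keys
        yield {
            "image1": "https://example.com/pano_1.jpg",
            "image2": "https://example.com/pano_2.jpg",
            "image3": "https://example.com/pano_3.jpg",
            "image4": "https://example.com/pano_4.jpg",
        }

    blocks = [
        # One image_manual block per panorama: block i would read from
        # "image{i}" and write its boxes to "spans{i}"
        {
            "view_id": "image_manual",
            "config": {
                "image_manual_image_key": f"image{i}",
                "image_manual_spans_key": f"spans{i}",
            },
        }
        for i in range(1, 5)
    ]
    return {
        "dataset": dataset,
        "stream": get_stream(),
        "view_id": "blocks",
        "config": {"blocks": blocks, "labels": ["TREE"]},
    }
```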
In the meantime, one solution would be to use an image editing library to concatenate the images vertically and store the offsets. So you know that image 2 starts at height 500 and ends at height 1000, and so on. With that information, it should be super easy to map the bounding box y values back to their position relative to the original image. For instance, if image 2 starts at 500, a box with "x": 100, "y": 750 would map to "x": 100, "y": 250 on image 2.
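Here's a minimal sketch of that workaround using Pillow. The file names and the example box are placeholders, and it assumes the bounding boxes come back with "x", "y", "width" and "height" keys as in image_manual box spans:

```python
from PIL import Image

def concat_vertical(paths):
    """Stack images vertically and record each image's y-offset."""
    images = [Image.open(p) for p in paths]
    width = max(im.width for im in images)
    total_height = sum(im.height for im in images)
    combined = Image.new("RGB", (width, total_height), "white")
    offsets = []
    y = 0
    for im in images:
        combined.paste(im, (0, y))
        offsets.append(y)
        y += im.height
    return combined, offsets

def map_box_to_source(box, offsets):
    """Map a box drawn on the combined image back to its source image."""
    # Pick the last image whose offset is <= the box's y position
    idx = max(i for i, off in enumerate(offsets) if box["y"] >= off)
    return idx, {**box, "y": box["y"] - offsets[idx]}

# Example: if image 2 starts at y=500, a box at y=750 maps to y=250 on image 2
paths = ["pano_1.jpg", "pano_2.jpg", "pano_3.jpg", "pano_4.jpg"]  # placeholder files
combined, offsets = concat_vertical(paths)
combined.save("combined.jpg")
idx, mapped = map_box_to_source({"x": 100, "y": 750, "width": 80, "height": 120}, offsets)
```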