Hello! Suppose the target I would like to annotate with bounding boxes is not just a single image, but a multi-page article. Is there a way I can merge multiple PNG files together and use image.manual with a scrolling page option? Thanks.
The image_manual UI assumes that you're annotating one image file at a time – but how you put that file together is up to you. So you could write a custom recipe that loads in your data (e.g. from multi-page PDFs), stitches the pages together into one image, converts that image to a base64 string and sends it out as the "image". You can use the task's "meta" to store additional info about the image, so you always know which document it relates to. Then you just need to store the y offset of each page, so you can easily calculate each bounding box's position relative to its original page.
I haven't really done much image manipulation in Python, but it sounds like a fairly simple task – you can probably do it all with Pillow. Or you could use any other tool or programming language to preprocess your data and generate the JSONL to stream into Prodigy.
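To make the idea concrete, here's a rough sketch of what that preprocessing could look like with Pillow. The function names, the `"page_offsets"` meta key and the data-URI format are my own choices for illustration, not a fixed Prodigy API – the only requirement is that the task dict has an `"image"` value the UI can display:

```python
import base64
import io
import os
import tempfile

from PIL import Image  # pip install pillow


def merge_pages(pages):
    """Stack page images vertically, recording each page's y offset."""
    width = max(page.width for page in pages)
    height = sum(page.height for page in pages)
    merged = Image.new("RGB", (width, height), "white")
    offsets = []
    y = 0
    for page in pages:
        merged.paste(page, (0, y))
        offsets.append(y)
        y += page.height
    return merged, offsets


def make_task(paths, doc_id):
    """Build one Prodigy-style task dict from a list of page PNGs."""
    pages = [Image.open(p).convert("RGB") for p in paths]
    merged, offsets = merge_pages(pages)
    buf = io.BytesIO()
    merged.save(buf, format="PNG")
    data = base64.b64encode(buf.getvalue()).decode("utf-8")
    return {
        # Base64 data URI, so no static file server is needed
        "image": "data:image/png;base64," + data,
        # Keep the document ID and per-page y offsets in "meta",
        # so spans can later be mapped back to their original page
        "meta": {"doc": doc_id, "page_offsets": offsets},
    }


if __name__ == "__main__":
    # Hypothetical example: two dummy 10x10 pages written to a temp dir
    tmp = tempfile.mkdtemp()
    path = os.path.join(tmp, "page.png")
    Image.new("RGB", (10, 10), "red").save(path)
    task = make_task([path, path], "doc-1")
    print(task["meta"])
```

With the offsets stored in `"meta"`, mapping an annotated box back to its source page is just a matter of finding the largest offset that is still below the box's top y coordinate and subtracting it. Emit one such dict per document, write them out as JSONL, and stream that into Prodigy.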