Hello,
I would like to create a ner.manual task on an OCR text layer. Because the OCR output contains some errors, it would be nice to display the original image next to the ner.manual interface.
My JSONL for the NER tasks looks like this:
{"text": "my super simple dummy text", "meta": {"filename": "E:\\input\\ocr\\dummy.txt", "image_source": "E:\\input\\jpg\\dummy.jpg"}}
So the image path is already included in the JSON. I am aware that the browser may not display images from local file paths, so I will serve the images via URLs later instead of file paths.
While trying to solve this, I came up with several possible approaches, but I don't know which one is the easiest.
- Create a custom recipe based on ner.manual that uses blocks with an image and a ner_manual interface, even though I don't need to annotate the image itself:
blocks = [
    {"view_id": "image"},
    {"view_id": "ner_manual"},
]
My problem is that I don't know how to embed the image into the stream returned by

# Load the stream from a JSONL file and return a generator that yields a
# dictionary for each example in the data.
stream = JSONL(source)

so that I end up with one combined stream, instead of two independent streams: one with the text and one with the images (fetch_media).
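To illustrate what I mean, here is a rough sketch (plain Python, no Prodigy imports) of how I imagine merging the image path into the same stream. The `add_images` helper and the task shape are just my assumptions; in a real recipe the wrapped stream would go into the recipe's return dict together with `"view_id": "blocks"`:

```python
def add_images(stream):
    """Copy the image path from each task's meta into the top-level
    "image" key, which (as I understand it) the image block reads from."""
    for task in stream:
        task = dict(task)  # work on a copy, don't mutate the original
        task["image"] = task["meta"]["image_source"]
        yield task

# Example task shaped like my JSONL above (forward slashes for portability)
tasks = [{"text": "my super simple dummy text",
          "meta": {"filename": "E:/input/ocr/dummy.txt",
                   "image_source": "E:/input/jpg/dummy.jpg"}}]
merged = list(add_images(tasks))
```

With something like this, the text and the image would travel in a single task dictionary, so no second stream would be needed.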
- Use something with an HTML template. But how would I then embed the image in one column and the ner_manual interface in the other?
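For the HTML-template idea, this is roughly what I imagine: an html block whose template pulls the image path from the task, plus some CSS to lay the blocks out side by side. The `{{image}}` template variable and the `.prodigy-content` class name are assumptions on my part, not something I have tested:

```python
# Blocks config combining an HTML block (for the image) with ner_manual.
# Assumption: "html_template" uses Mustache-style variables filled from
# the task, so {{image}} would resolve to the task's "image" field.
blocks = [
    {"view_id": "html",
     "html_template": "<img src='{{image}}' style='max-width: 100%'/>"},
    {"view_id": "ner_manual"},
]

# Hypothetical global CSS to place the blocks in two columns; the actual
# container class name may differ between Prodigy versions.
global_css = ".prodigy-content { display: flex; flex-direction: row; }"

config = {"blocks": blocks, "global_css": global_css}
```

Is something along these lines the intended way to do it?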
I think this should be a fairly common task, so maybe someone has already built something like this and can share some advice or code I can use for orientation.
Thanks
Akiono