Image size in web application

Tags: image, front-end, solved

(Joseph Prusa) #1

Is there a best practice or suggested way of scaling the size at which images are displayed?

I have hooked a simple TensorFlow model for image classification (MNIST) into a teach recipe and noticed that the images are displayed very small in the web application, since they are only 28×28 pixels.

Also, for manual object detection annotation and a teach recipe I wrote for YOLO, image size can be an issue: small objects can be hard to see or select. Again, changing the display size, or preferably a zoom option, would be helpful.


(Ines Montani) #2

Thanks for sharing what you’re building – this sounds very exciting and I’m definitely curious to hear how you go with training a TF model in the loop! :raised_hands:

The JSON data for the image supports a "width" and "height" property, which Prodigy should respect and take at face value. So you could try setting that to, say, 140, and the images should be scaled up, at least visually in the browser. If no "width" and "height" are present, the web app will read those values off the original image.
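One way to apply this across a whole stream is a small wrapper that sets `"width"` and `"height"` on every task before it reaches the web app. This is a minimal sketch; the task contents and the `add_display_size` helper name are just for illustration:

```python
def add_display_size(stream, width=140, height=140):
    """Set a fixed display size on every image task in the stream."""
    for task in stream:
        task["width"] = width
        task["height"] = height
        yield task

# Example: a 28x28 MNIST task gets scaled up to 140x140 in the browser.
task = {"image": "mnist_0001.jpg", "label": "7"}
scaled = list(add_display_size([task]))
```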

The custom_theme setting in the recipe config and/or prodigy.json supports a cardMaxWidth property, which is the maximum width of the “annotation card” in pixels. This setting currently defaults to 675, but if you’re working with large images, you can increase this value to have the image take up a larger part of the screen.
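In a custom recipe, that override can be returned as part of the recipe's `"config"` dict (a sketch, assuming the config shape described above; the same keys could equally go in `prodigy.json`):

```python
def recipe_config():
    # Raise the annotation card's maximum width from the 675px default
    # so large images take up more of the screen.
    return {
        "config": {
            "custom_theme": {"cardMaxWidth": 1000}
        }
    }
```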

(The interface should also respond to the native browser zoom, but that’s obviously not that satisfying.)


(Joseph Prusa) #3

Thank you, setting a fixed "width" and "height" does scale the images. I imagine the html interface with the right html_template could be set up to autoscale images with the browser size. I’ll try playing with that too.
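A template along these lines might do it. This is only a sketch: it assumes `{{image}}` is available as a Mustache-style task variable in the template, and the exact behavior may depend on the Prodigy version:

```python
config = {
    # Let the image scale with the browser window instead of a fixed
    # pixel size; "{{image}}" is substituted from the task data.
    "html_template": (
        '<img src="{{image}}" style="max-width: 100%; height: auto;" />'
    )
}
```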

Right now I just have a simple update method that trains the model on each batch of examples evaluated by the user. It’s more of a proof of concept than anything useful, since this results in small changes to the model that may not be noticeable to the user. I am looking into various strategies for learning with extremely limited labeled data and am planning to test several approaches. If I figure out something that works well, I’ll be sure to post about it.
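For reference, the shape of such an update callback might look like the following. This is a minimal sketch with the TF model replaced by a stub; a real recipe would decode the image data and call something like a Keras-style `train_on_batch` with arrays:

```python
class StubModel:
    """Stand-in for a TensorFlow/Keras model, for illustration only."""
    def __init__(self):
        self.batches_seen = 0

    def train_on_batch(self, examples):
        self.batches_seen += 1

model = StubModel()

def update(answers):
    """Called by Prodigy with each batch of annotated examples."""
    accepted = [a for a in answers if a["answer"] == "accept"]
    if accepted:
        model.train_on_batch(accepted)
    return len(accepted)

n = update([{"answer": "accept"}, {"answer": "reject"}])
```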