I am planning to use the TensorFlow Object Detection models with data annotated with Prodigy. Do I need to resize all the images first before I annotate? Or can I annotate the originals and somehow feed those into the models?
Hi @pl6306 ,
I'm not that familiar with the TensorFlow Object Detection models, but if they require a specific size, it may be wise to resize the images before annotating, since the bounding boxes you get in Prodigy correspond to pixel coordinates. If you resize first, you don't need to translate the coordinates into the new image's space afterwards.
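If you do end up resizing after annotating, the coordinate translation itself is straightforward: scale each box by the same width/height factors as the image. A minimal sketch (the `scale_box` helper is hypothetical, not part of Prodigy; it assumes Prodigy-style boxes given as x, y, width, height in pixels):

```python
def scale_box(box, orig_size, new_size):
    """Scale an (x, y, width, height) pixel box when the image is resized.

    box       -- (x, y, w, h) in pixels, as in a Prodigy image span
    orig_size -- (width, height) of the annotated image
    new_size  -- (width, height) of the resized image
    """
    sx = new_size[0] / orig_size[0]  # horizontal scale factor
    sy = new_size[1] / orig_size[1]  # vertical scale factor
    x, y, w, h = box
    return (x * sx, y * sy, w * sx, h * sy)

# A 100x50 box at (40, 20) in an 800x600 image, after resizing to 400x300:
print(scale_box((40, 20, 100, 50), (800, 600), (400, 300)))
# -> (20.0, 10.0, 50.0, 25.0)
```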
Were you able to get the model-in-the-loop setup with tensorflow working? If so, any tips?
I tried following the guide but without much success myself.
Not yet but I am going to work through it. I will let you know if I get it working.
This posting, Training a classifier to detect redacted documents with fastai | mlops.systems, leads me to believe that resizing happens after annotation.
For the TensorFlow 2.0 Object Detection API, I believe the model handles the resizing. See models/defining_your_own_model.md at master · tensorflow/models · GitHub:

> DetectionModels should make no assumptions about the input size or aspect ratio --- they are responsible for doing any resize/reshaping necessary (see docstring for the preprocess function).
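If the model's `preprocess` step really does handle resizing, then annotating the originals should be fine as long as the boxes are normalized to the original image's dimensions. A sketch, assuming the TF2 Object Detection API's groundtruth convention of normalized `[ymin, xmin, ymax, xmax]` boxes in [0, 1] (the `prodigy_span_to_tf_box` helper is my own, not part of either library):

```python
def prodigy_span_to_tf_box(span, img_width, img_height):
    """Convert a Prodigy-style image span (pixel x, y, width, height)
    to a normalized [ymin, xmin, ymax, xmax] box in [0, 1]."""
    x, y = span["x"], span["y"]
    w, h = span["width"], span["height"]
    return [y / img_height,        # ymin
            x / img_width,         # xmin
            (y + h) / img_height,  # ymax
            (x + w) / img_width]   # xmax

# A box annotated on the original 800x600 image:
span = {"x": 40, "y": 20, "width": 100, "height": 50}
print(prodigy_span_to_tf_box(span, 800, 600))
```

Since the coordinates are relative to the image size, they stay valid regardless of what size the model internally resizes to.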