I have trained an object detector using TensorFlow's Object Detection API on Google Colab. After searching the internet for most of the day, I haven't been able to find a tutorial on how to run an evaluation for my model, so that I can get metrics like mAP.
I figured out that I have to use the eval.py script from the models/research/object_detection folder, but I'm not sure which parameters I should pass to it.
Briefly, here's what I've done so far: I generated the labels for the test and train images and stored them under the object_detection/images folder. I also generated the train.record and test.record files, and I wrote the labelmap.pbtxt file (shown below). I am using the faster_rcnn_inception_v2_coco model from the TensorFlow model zoo, so I configured the faster_rcnn_inception_v2_coco.config file and stored it in the object_detection/training folder. Training ran just fine, and all the checkpoints are also stored in the object_detection/training folder.
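For reference, my labelmap.pbtxt follows the usual Object Detection API label map format, one item per class with IDs starting at 1 (the class names here are just placeholders for my actual labels):

item {
  id: 1
  name: 'class_one'
}
item {
  id: 2
  name: 'class_two'
}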
Now, to evaluate the model, I ran the eval.py script like this:
!python eval.py --logtostderr --pipeline_config_path=training/faster_rcnn_inception_v2_coco.config --checkpoint_dir=training/ --eval_dir=eval/
Is this okay? It started running fine, but when I opened TensorBoard there were only two tabs, Images and Graph, and no Scalars. Also, I ran TensorBoard with --logdir=eval, as shown below.
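For completeness, this is how I launch TensorBoard inside the Colab notebook (assuming the runtime has the TensorBoard notebook extension available; the eval directory is relative to object_detection):

%load_ext tensorboard
%tensorboard --logdir=eval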
I am new to TensorFlow, so any kind of help is welcome. Thank you.
Your setup looks fine. I had to wait a long time for the Scalars tab to show up alongside the other two, roughly 10 minutes after the evaluation job finished.
At the end of the evaluation job, though, the script prints all the scalar metrics to the console, the same ones that will be displayed in the Scalars tab:
Accumulating evaluation results...
DONE (t=1.57s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.434
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.693
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.470
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
etc.
If you want to use the newer model_main.py script instead of legacy/eval.py, you can call it like this:
python model_main.py --alsologtostderr --run_once --checkpoint_dir=/dir/with/checkpoint/at/one/timestamp --model_dir=eval/ --pipeline_config_path=training/faster_rcnn_inception_v2_coco.config
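If you need --run_once to evaluate one particular checkpoint rather than whatever is latest in the training folder, a simple trick (just a sketch; model.ckpt-50000 is a made-up step number) is to copy that checkpoint into its own directory together with a minimal checkpoint index file pointing at it:

# copy one checkpoint's files into a fresh directory
mkdir -p single_ckpt
cp training/model.ckpt-50000.* single_ckpt/
# write a minimal 'checkpoint' index file naming that checkpoint
echo 'model_checkpoint_path: "model.ckpt-50000"' > single_ckpt/checkpoint

Then pass --checkpoint_dir=single_ckpt/ to model_main.py.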
Note that this new API requires the optimizer field in train_config, which is probably already in your pipeline config, since you're using the same file for both training and evaluation.
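For reference, the optimizer block in the sample faster_rcnn_inception_v2 pipeline configs looks roughly like this (trimmed; the values are the sample defaults and only illustrative):

train_config: {
  batch_size: 1
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.0002
          schedule {
            step: 900000
            learning_rate: 0.00002
          }
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
}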