I'm writing software (for my studies) that recognises traffic signs from an IP camera. For now, I have to recognise traffic signs like this:
In my code, I apply a Hough transformation to isolate the traffic sign with a mask.
Then I run a SURF comparison (based on a modified sample from the OpenCV SURF documentation) between the scene image and a few reference images of different traffic signs (30, 50, 70, 90).
Here is an example of my reference object: http://www.noelshack.com/2015-05-1422561271-object-exemple.jpg
My questions are:
Is my approach right? Is SURF really suited here? It seems to use a lot of resources.
I get false positives (for example, when I compare the 30 reference object against a 50 in the scene). How can I reduce them?
Yes, this is a task where you would usually expect a SURF matching method to work: if two signs are similar, you expect most of their SURF features to match.
However, it is not the only method that may work. You may also want to try SIFT or FAST feature matching; they might reduce your false positives. You could also experiment with your SURF matching parameters, for example how you compare the features and the thresholds for accepting a match. A 30 km/h sign may match both 30 km/h and 50 km/h signs in some cases; then you will need other criteria to separate them, such as the number of matched features, or the percentage of images in your labelled set that match the sign.
If you still get unsatisfactory results, I would suggest trying a cascade classifier with HOG features to detect the digits '3', '5', '7', etc. You will need to train the classifier on a set of cropped digits from your labelled signs, then use it to detect those digits in your test images. Cascade classifiers are also implemented in OpenCV.