Writing robust (color and size invariant) circle detection with OpenCV (based on Hough transform or other features)

memyself · Mar 25, 2012 · Viewed 49.6k times

I wrote the following very simple python code to find circles in an image:

import cv
import numpy as np

WAITKEY_DELAY_MS = 10
STOP_KEY = 'q'

cv.NamedWindow("image - press 'q' to quit", cv.CV_WINDOW_AUTOSIZE);
cv.NamedWindow("post-process", cv.CV_WINDOW_AUTOSIZE);

key_pressed = False
while key_pressed != STOP_KEY:

    # grab image
    orig = cv.LoadImage('circles3.jpg')

    # create tmp images
    grey_scale = cv.CreateImage(cv.GetSize(orig), 8, 1)
    processed = cv.CreateImage(cv.GetSize(orig), 8, 1)


    cv.Smooth(orig, orig, cv.CV_GAUSSIAN, 3, 3)

    cv.CvtColor(orig, grey_scale, cv.CV_RGB2GRAY)

    # do some processing on the grey scale image
    cv.Erode(grey_scale, processed, None, 10)
    cv.Dilate(processed, processed, None, 10)
    cv.Canny(processed, processed, 5, 70, 3)
    cv.Smooth(processed, processed, cv.CV_GAUSSIAN, 15, 15)

    storage = cv.CreateMat(orig.width, 1, cv.CV_32FC3)

    # these parameters need to be adjusted for every single image
    HIGH = 50
    LOW = 140

    try: 
        # extract circles
        cv.HoughCircles(processed, storage, cv.CV_HOUGH_GRADIENT, 2, 32.0, HIGH, LOW)

        for i in range(0, len(np.asarray(storage))):
            print "circle #%d" %i
            Radius = int(np.asarray(storage)[i][0][2])
            x = int(np.asarray(storage)[i][0][0])
            y = int(np.asarray(storage)[i][0][1])
            center = (x, y)

            # green dot on center and red circle around
            cv.Circle(orig, center, 1, cv.CV_RGB(0, 255, 0), -1, 8, 0)
            cv.Circle(orig, center, Radius, cv.CV_RGB(255, 0, 0), 3, 8, 0)

            cv.Circle(processed, center, 1, cv.CV_RGB(0, 255, 0), -1, 8, 0)
            cv.Circle(processed, center, Radius, cv.CV_RGB(255, 0, 0), 3, 8, 0)

    except:
        print "nothing found"

    # show images
    cv.ShowImage("image - press 'q' to quit", orig)
    cv.ShowImage("post-process", processed)

    cv_key = cv.WaitKey(WAITKEY_DELAY_MS)
    key_pressed = chr(cv_key & 255)

As you can see from the following two examples, the 'circle finding quality' varies quite a lot:

CASE 1:

[input image 1] [detection result 1] [post-processed image 1]

CASE 2:

[input image 2] [detection result 2] [post-processed image 2]

Case 1 and Case 2 are basically the same image, yet the algorithm detects different circles. If I present the algorithm with an image containing differently sized circles, the detection might even fail completely. This is mostly due to the HIGH and LOW parameters, which need to be adjusted individually for each new picture.

Therefore my question: what are the various possibilities for making this algorithm more robust? It should be size- and color-invariant, so that circles of different colors and different sizes are detected. Maybe using the Hough transform is not the best way of doing things? Are there better approaches?

Answer

fireant · Apr 16, 2012

The following is based on my experience as a vision researcher. From your question you seem to be interested in possible algorithms and methods rather than only a working piece of code. First I give a quick and dirty Python script for your sample images, with some results shown to prove it could possibly solve your problem. After getting these out of the way, I try to answer your questions regarding robust detection algorithms.

Quick Results

Some sample images (all the images apart from yours were downloaded from flickr.com and are CC licensed) with the detected circles (without changing/tuning any parameters; exactly the following code is used to extract the circles in all the images):

[detected blobs in sample image 1] [detected blobs in sample image 2] [lots of circle blobs in flickr image 1]

Code (based on the MSER Blob Detector)

And here is the code:

import cv2
import math
import numpy as np

d_red = cv2.cv.RGB(150, 55, 65)
l_red = cv2.cv.RGB(250, 200, 200)

orig = cv2.imread("c.jpg")
img = orig.copy()
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detect MSER regions on the grayscale image, sorted largest first
detector = cv2.FeatureDetector_create('MSER')
fs = detector.detect(img2)
fs.sort(key=lambda x: -x.size)

def suppress(x):
    # non-maximum suppression: drop a keypoint if a larger keypoint
    # lies within half of that larger keypoint's size
    for f in fs:
        distx = f.pt[0] - x.pt[0]
        disty = f.pt[1] - x.pt[1]
        dist = math.sqrt(distx * distx + disty * disty)
        if (f.size > x.size) and (dist < f.size / 2):
            return True
    return False

sfs = [x for x in fs if not suppress(x)]

# draw the surviving keypoints as circles (dark outline, light inner line)
for f in sfs:
    cv2.circle(img, (int(f.pt[0]), int(f.pt[1])), int(f.size / 2), d_red, 2, cv2.CV_AA)
    cv2.circle(img, (int(f.pt[0]), int(f.pt[1])), int(f.size / 2), l_red, 1, cv2.CV_AA)

# place the original and annotated images side by side for comparison
h, w = orig.shape[:2]
vis = np.zeros((h, w * 2 + 5), np.uint8)
vis = cv2.cvtColor(vis, cv2.COLOR_GRAY2BGR)
vis[:h, :w] = orig
vis[:h, w + 5:w * 2 + 5] = img

cv2.imshow("image", vis)
cv2.imwrite("c_o.jpg", vis)
cv2.waitKey()
cv2.destroyAllWindows()

As you can see, it's based on the MSER blob detector. The code doesn't preprocess the image apart from the simple mapping into grayscale, so missing those faint yellow blobs in your images is expected.
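If you wanted to pick up those faint yellow blobs as well, one option worth trying is to feed the detector a channel in which they have more contrast than in plain grayscale, such as the HSV saturation channel. A minimal sketch of that idea, reusing the same 2.4-era API and sample filename as above (whether this actually helps on your images is an assumption to verify):

import cv2

# Sketch: faint but saturated blobs often stand out in the S channel
# of HSV even when they almost vanish in plain grayscale.
orig = cv2.imread("c.jpg")
hsv = cv2.cvtColor(orig, cv2.COLOR_BGR2HSV)
sat = hsv[:, :, 1]  # saturation channel

detector = cv2.FeatureDetector_create('MSER')
fs = detector.detect(sat)  # same MSER detector, different input image
print len(fs), "MSER regions found on the saturation channel"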

Theory

In short: you don't tell us what you know about the problem apart from giving two sample images with no description of them. Here I explain why, in my humble opinion, it is important to have more information about the problem before asking which methods are efficient for attacking it.

Back to the main question: what is the best method for this problem? Let's look at it as a search problem. To simplify the discussion, assume we are looking for circles with a given size/radius. Thus, the problem boils down to finding the centers. Every pixel is a candidate center; therefore, the search space contains all the pixels.

P = {p1, ..., pn} 
P: search space
p1...pn: pixels

To solve this search problem two other functions should be defined:

E(P) : enumerates the search space
V(p) : checks whether the item/pixel has the desirable properties, the items passing the check are added to the output list

Assuming the complexity of the algorithm doesn't matter, an exhaustive or brute-force search can be used, in which E takes every pixel and passes it to V. In real-time applications it's important to reduce the search space and optimize the computational efficiency of V.
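Purely as an illustration of this formulation, here is a sketch of a brute-force detector for one fixed integer radius; the edge-support check standing in for V and the 50% support threshold are my own assumptions, not part of any standard API:

import numpy as np

def exhaustive_circle_search(edges, radius, min_support=0.5):
    # Brute-force instance of the E/V formulation above, for a fixed
    # integer radius. E enumerates every pixel; V measures edge support
    # along the candidate circle (essentially what the Hough accumulator
    # does). `edges` is a 2D binary edge map (e.g. Canny output).
    h, w = edges.shape
    angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    offsets = np.round(radius * np.column_stack(
        (np.cos(angles), np.sin(angles)))).astype(int)
    found = []
    for y in range(radius, h - radius):           # E: enumerate every pixel
        for x in range(radius, w - radius):
            ring = edges[y + offsets[:, 1], x + offsets[:, 0]]
            if (ring > 0).mean() >= min_support:  # V: keep supported centers
                found.append((x, y))
    return found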

We are getting closer to the main question: how could we define V? To be more precise: which properties of the candidates should be measured, and how should we solve the dichotomy problem of splitting them into desirable and undesirable? The most common approach is to find some properties which can be used to define simple decision rules based on the measurement of those properties. This is what you're doing by trial and error. You're programming a classifier by learning from positive and negative examples. This is because the methods you're using have no idea what you want to do. You have to adjust/tune the parameters of the decision rule and/or preprocess the data such that the variation in the properties (of the desirable candidates) used by the method for the dichotomy problem is reduced.

You could use a machine learning algorithm to find the optimal parameter values for a given set of examples. There's a whole host of learning algorithms, from decision trees to genetic programming, that you can use for this problem. You could also use a learning algorithm to find the optimal parameter values for several circle detection algorithms and see which one gives better accuracy. This shifts the main burden onto the learning algorithm; you just need to collect sample images.
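As a deliberately simple sketch of that idea, one could grid-search the two HoughCircles thresholds against a few hand-labeled images and keep the best-scoring setting. The filenames, circle counts, and error-count fitness function below are all made-up placeholders:

import itertools
import cv2

# Hypothetical labeled data: (grayscale image, hand-counted circles).
labeled = [(cv2.imread(f, 0), n) for f, n in [("a.jpg", 5), ("b.jpg", 7)]]

def score(param1, param2):
    # Crude fitness: negative total error in the detected circle count.
    err = 0
    for img, true_n in labeled:
        c = cv2.HoughCircles(img, cv2.cv.CV_HOUGH_GRADIENT, 2, 32,
                             param1=param1, param2=param2)
        found = 0 if c is None else c.shape[1]
        err += abs(found - true_n)
    return -err

# Exhaustive grid search over both thresholds; keep the best pair.
best = max(itertools.product(range(20, 200, 20), repeat=2),
           key=lambda p: score(*p))
print "best (param1, param2):", best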

The other approach to improving robustness, which is often overlooked, is to utilize extra readily available information. If you know the color of the circles, you could improve the accuracy of the detector significantly with virtually zero extra effort. If you knew the positions of the circles on the plane and wanted to detect the imaged circles, you should remember that the transformation between these two sets of positions is described by a 2D homography, and the homography can be estimated using only four points. Then you could improve the robustness to get a rock-solid method. The value of domain-specific knowledge is often underestimated. Look at it this way: in the first approach we try to approximate some decision rules based on a limited number of samples; in the second approach we know the decision rules and only need to find a way to utilize them effectively in an algorithm.
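For instance, if the circle positions on the plane were known, a sketch of exploiting the homography might look like this; the four correspondences and the plane coordinates are invented for illustration (cv2.getPerspectiveTransform computes the homography exactly from four point pairs):

import cv2
import numpy as np

# Hypothetical correspondences: four reference points on the physical
# plane and their observed pixel positions in the image.
plane_pts = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]])
image_pts = np.float32([[12, 34], [310, 40], [300, 330], [20, 320]])

H = cv2.getPerspectiveTransform(plane_pts, image_pts)

# Known circle centers on the plane, mapped into the image. The
# detector then only has to verify circles near the predicted spots.
centers = np.float32([[[25, 25], [75, 25], [50, 75]]])
predicted = cv2.perspectiveTransform(centers, H)
print predicted.reshape(-1, 2)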

Summary

To summarize, there are two approaches to improve the accuracy / robustness of the solution:

  1. Tool-based: finding an easier-to-use algorithm with fewer parameters, tweaking the algorithm, or automating the tuning process by using machine learning algorithms
  2. Information-based: are you using all the readily available information? In the question you don't mention what you know about the problem.

For these two images you have shared, I would use a blob detector, not the HT method. For background subtraction, I would suggest trying to estimate the color of the background, since in the two images it does not vary while the color of the circles does, and most of the area is bare.
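A rough sketch of that background-estimation idea, assuming (as stated above) that most of the area is background: take the per-channel median as the background color and threshold on the color distance. The threshold of 40 is just a guess to tune:

import cv2
import numpy as np

orig = cv2.imread("c.jpg")

# With most of the area being background, the per-channel median is a
# reasonable estimate of the background color.
bg = np.median(orig.reshape(-1, 3), axis=0)

# Euclidean color distance of every pixel from the background; pixels
# far enough away are treated as foreground. 40 is an arbitrary guess.
dist = np.sqrt(((orig.astype(np.float32) - bg) ** 2).sum(axis=2))
mask = (dist > 40).astype(np.uint8) * 255

cv2.imshow("foreground mask", mask)
cv2.waitKey()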