OpenCV: Efficient Difference-of-Gaussian

peanutman · Jan 7, 2013 · Viewed 20.8k times

I am trying to implement difference of Gaussians (DoG) for a specific case of edge detection. As the name of the algorithm suggests, it is actually fairly straightforward:

Mat g1, g2, result;
Mat img = imread("test.png", CV_LOAD_IMAGE_COLOR); 
GaussianBlur(img, g1, Size(1,1), 0);
GaussianBlur(img, g2, Size(3,3), 0);
result = g1 - g2;

However, I have the feeling that this can be done more efficiently. Can it perhaps be done in fewer passes over the data?

The question here has taught me about separable filters, but I'm too much of an image processing newbie to understand how to apply them in this case.

Can anyone give me some pointers on how one could optimise this?

Answer

Abhishek Thakur · Jan 7, 2013

Separable filters produce the same result as a normal Gaussian filter, but they are faster once the image (and kernel) gets large. The Gaussian kernel can be formed analytically and separated into two 1-dimensional vectors, one horizontal and one vertical.

For example, consider the kernel

1 2 1
2 4 2
1 2 1

This kernel can be separated into a horizontal vector H = [1 2 1] and a vertical vector V = [1 2 1]ᵀ; the 2-D kernel is exactly the outer product of V and H. The two 1-D filters are applied in sequence: the image is first convolved with H along the rows, and that intermediate result is then convolved with V along the columns. This gives the same Gaussian blur while doing far fewer operations per pixel than the full 2-D kernel. Here is a function that performs a separable Gaussian blur:

Mat sepConv(Mat input, int radius)
{
    Mat dst, dst2;

    // kernel size and sigma derived from the requested radius
    int ksize = 2 * radius + 1;
    double sigma = radius / 2.575;

    // getGaussianKernel returns a ksize x 1 column vector of filter taps
    Mat gau = getGaussianKernel(ksize, sigma, CV_32F);

    // convolve with the 1-D kernel along one axis ...
    filter2D(input, dst2, -1, gau);

    // ... then transpose, convolve along the other axis, and transpose back
    filter2D(dst2.t(), dst, -1, gau);

    return dst.t();
}
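
With sepConv in place, the DoG from the question becomes two separable blurs and a subtraction. A minimal sketch (the radii 1 and 2 are illustrative values, not prescribed ones):

Mat img = imread("test.png", CV_LOAD_IMAGE_COLOR);
// difference of two separable Gaussian blurs; as with the original g1 - g2,
// negative values saturate to 0 for 8-bit images
Mat result = sepConv(img, 1) - sepConv(img, 2);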

One more way to speed up the Gaussian blur is FFT-based convolution. FFT-based convolution becomes much faster than the separable-kernel method once the kernel and image are large.

A quick Google search turned up the following function:

Mat Conv2ByFFT(Mat A, Mat B)
{
    // note: dft() expects one- or two-channel floating-point data
    Mat C;
    // reallocate the output array if needed ("valid" convolution size)
    C.create(abs(A.rows - B.rows) + 1, abs(A.cols - B.cols) + 1, A.type());

    // compute the size of the DFT transform
    Size dftSize;
    dftSize.width  = getOptimalDFTSize(A.cols + B.cols - 1);
    dftSize.height = getOptimalDFTSize(A.rows + B.rows - 1);

    // allocate temporary buffers and initialize them with 0's
    Mat tempA(dftSize, A.type(), Scalar::all(0));
    Mat tempB(dftSize, B.type(), Scalar::all(0));

    // copy A and B to the top-left corners of tempA and tempB, respectively
    Mat roiA(tempA, Rect(0, 0, A.cols, A.rows));
    A.copyTo(roiA);
    Mat roiB(tempB, Rect(0, 0, B.cols, B.rows));
    B.copyTo(roiB);

    // now transform the padded A & B in-place;
    // use the "nonzeroRows" hint for faster processing
    dft(tempA, tempA, 0, A.rows);
    dft(tempB, tempB, 0, B.rows);

    // multiply the spectrums;
    // the function handles packed spectrum representations well
    mulSpectrums(tempA, tempB, tempA, 0);

    // transform the product back from the frequency domain.
    // Even though all the result rows will be non-zero,
    // we need only the first C.rows of them, and thus we
    // pass nonzeroRows == C.rows
    dft(tempA, tempA, DFT_INVERSE + DFT_SCALE, C.rows);

    // now copy the result back to C
    tempA(Rect(0, 0, C.cols, C.rows)).copyTo(C);

    // all the temporary buffers will be deallocated automatically
    return C;
}
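
Since dft() only works on floating-point data, Conv2ByFFT needs a grayscale float image and float kernels. A minimal sketch of the DoG built on top of it, assuming both kernels have the same size so the "valid"-size outputs line up (the kernel size and sigmas below are illustrative, not values from the answer):

Mat img = imread("test.png", CV_LOAD_IMAGE_GRAYSCALE);
img.convertTo(img, CV_32F, 1.0 / 255.0);        // dft() needs float data

Mat g1 = getGaussianKernel(9, 0.5, CV_32F);     // narrow Gaussian
Mat g2 = getGaussianKernel(9, 1.5, CV_32F);     // wider Gaussian
Mat k1 = g1 * g1.t();                           // 2-D kernels via outer product
Mat k2 = g2 * g2.t();

// result is CV_32F, so negative DoG values are preserved
Mat result = Conv2ByFFT(img, k1) - Conv2ByFFT(img, k2);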

Hope this helps. :)