Difference between "Edge Detection" and "Image Contours"

PeakGen · Jun 14, 2013 · Viewed 41.2k times

I am working on the following code:

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace std;
using namespace cv;

Mat src, grey;
int thresh = 10;

const char* windowName = "Contours";

void detectContours(int,void*);

int main()
{
    src = imread("C:/Users/Public/Pictures/Sample Pictures/Penguins.jpg");
    if(src.empty())
        return -1;

    //Convert to grey scale
    cvtColor(src,grey,CV_BGR2GRAY);

    //Remove the noise
    cv::GaussianBlur(grey,grey,Size(3,3),0);

    //Create the window
    namedWindow(windowName);

    //Display the original image
    namedWindow("Original");
    imshow("Original",src);

    //Create the trackbar
    cv::createTrackbar("Thresholding",windowName,&thresh,255,detectContours);

    detectContours(0,0);
    waitKey(0);
    return 0;

}

void detectContours(int,void*)
{
    Mat canny_output,drawing;

    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;

    //Detect edges using canny
    cv::Canny(grey,canny_output,thresh,2*thresh);

    namedWindow("Canny");
    imshow("Canny",canny_output);

    //Find contours
    cv::findContours(canny_output,contours,hierarchy,CV_RETR_TREE,CV_CHAIN_APPROX_SIMPLE,Point(0,0));

    //Setup the output into black
    drawing = Mat::zeros(canny_output.size(),CV_8UC3);

    //Draw contours
    for(size_t i = 0; i < contours.size(); i++)
    {
        cv::drawContours(drawing,contours,(int)i,Scalar(255,255,255),1,8,hierarchy,0,Point());
    }

    imshow(windowName,drawing);

}

Theoretically, finding contours means detecting curves, and edge detection means detecting edges. In the code above, I have done edge detection using Canny and curve detection using findContours(). The resulting images are below.

Canny Image

(image)

Contours Image

(image)

As you can see, there is practically no difference between the two results. So what is the actual difference between these two operations? The OpenCV tutorials only give the code; I found an explanation of what contours are, but it does not address this question.

Answer

sansuiso · Jun 14, 2013

Edges are computed as points that are extrema of the image gradient in the direction of the gradient. If it helps, you can think of them as the min and max points of a 1D function. The point is that edge pixels are a local notion: they just point out a significant difference between neighbouring pixels.
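
As a rough sketch of that idea (not part of the original answer; the input path and the threshold of 100 are placeholders), the gradient can be approximated with Sobel filters and its magnitude thresholded. Canny goes further: it keeps only pixels that are local maxima along the gradient direction (non-maximum suppression) and links them using two hysteresis thresholds.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

int main()
{
    Mat grey = imread("input.jpg", 0);   // hypothetical input, loaded as greyscale
    if(grey.empty())
        return -1;

    Mat gx, gy, mag, edges;
    Sobel(grey, gx, CV_32F, 1, 0);       // horizontal derivative
    Sobel(grey, gy, CV_32F, 0, 1);       // vertical derivative
    magnitude(gx, gy, mag);              // gradient magnitude per pixel

    // Pixels with a large gradient magnitude are edge candidates;
    // Canny would additionally thin and link them.
    threshold(mag, edges, 100, 255, THRESH_BINARY);
    edges.convertTo(edges, CV_8U);

    imshow("Gradient magnitude edges", edges);
    waitKey(0);
    return 0;
}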

Contours are often obtained from edges, but they are aimed at being object contours; thus, they need to be closed curves. You can think of them as boundaries (some image processing algorithms and libraries call them that). When they are obtained from edges, you need to connect the edges in order to obtain a closed contour.
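
To make the distinction concrete, here is a small illustrative sketch (not from the original answer; it uses a synthetic rectangle instead of the penguin photo) showing that an edge map is just an unordered set of pixels, while each contour returned by findContours() is an ordered chain of points tracing one boundary.

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace std;
using namespace cv;

int main()
{
    // Synthetic image: a filled white rectangle on a black background.
    Mat img = Mat::zeros(200, 200, CV_8UC1);
    rectangle(img, Point(50, 50), Point(150, 150), Scalar(255), -1);

    // Edge detection: a binary mask of independent edge pixels.
    Mat edges;
    Canny(img, edges, 50, 150);
    cout << "edge pixels: " << countNonZero(edges) << endl;

    // Contour extraction: each contour is one connected, ordered curve.
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(edges, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);

    cout << "contours found: " << contours.size() << endl;
    for(size_t i = 0; i < contours.size(); i++)
        cout << "contour " << i << ": " << contours[i].size()
             << " ordered points, enclosed area = "
             << contourArea(contours[i]) << endl;

    return 0;
}

Because the rectangle's outline is already closed, findContours can trace it as a whole boundary. On a natural image like the penguin photo, the Canny edges are often broken, so findContours ends up tracing many small curves that, when drawn, look almost identical to the edge map itself, which is why your two result images look the same.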