CV - Extract differences between two images

Valentin Trinqué · Nov 20, 2014 · Viewed 84.5k times

I am currently working on an intrusion-detection system based on video surveillance. To accomplish this, I take a snapshot of the background of my scene (assume it is totally clean, with no people or moving objects). Then I compare the frame I get from the (static) video camera against it and look for the differences. I have to be able to detect any kind of difference, not only human shapes, so I cannot rely on specific feature extraction.

Typically, I have:

http://postimg.org/image/dxtcp4u8h/

I am using OpenCV, so to compare I basically do:

cv::Mat bg_frame;   // clean background snapshot
cv::Mat cam_frame;  // current frame from the static camera
cv::Mat motion;     // resulting difference / motion mask

cv::absdiff(bg_frame, cam_frame, motion);                  // per-pixel absolute difference
cv::threshold(motion, motion, 80, 255, cv::THRESH_BINARY); // keep only strong differences
cv::erode(motion, motion, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3,3))); // remove small speckles

Here is the result:

http://postimg.org/image/3kz0o62id/

As you can see, the arm is stripped (due to the color difference conflict, I guess), and this is sadly not what I want.

I thought about adding cv::Canny() to detect the edges and fill in the missing parts of the arm, but sadly (once again) it only solves the problem in a few situations, not most of them.

Is there any algorithm or technique I could use to obtain an accurate difference report?

PS: Sorry for the image links; due to my new account, I do not have enough reputation to embed them.

EDIT: I use grayscale images here, but I am open to any solution.

Answer

Micka · Nov 20, 2014

One problem in your code is cv::threshold, which only uses single-channel images. Finding the pixel-wise "difference" between two images in grayscale alone often leads to unintuitive results.

Since the images you provided are a bit translated, or the camera wasn't stationary, I've manipulated your background image to add some foreground:

background image:

[image]

foreground image:

[image]

code:

    // per-channel absolute difference of the two BGR images
    cv::Mat diffImage;
    cv::absdiff(backgroundImage, currentImage, diffImage);

    cv::Mat foregroundMask = cv::Mat::zeros(diffImage.rows, diffImage.cols, CV_8UC1);

    float threshold = 30.0f;
    float dist;

    // mark a pixel as foreground if the Euclidean length of its
    // 3-channel difference vector exceeds the threshold
    for(int j=0; j<diffImage.rows; ++j)
        for(int i=0; i<diffImage.cols; ++i)
        {
            cv::Vec3b pix = diffImage.at<cv::Vec3b>(j,i);

            dist = (pix[0]*pix[0] + pix[1]*pix[1] + pix[2]*pix[2]);
            dist = sqrt(dist);

            if(dist>threshold)
            {
                foregroundMask.at<unsigned char>(j,i) = 255;
            }
        }

giving this result:

[image]

with this difference image:

[image]
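As a side note, the same per-pixel Euclidean distance can also be computed without the explicit loops; here is a minimal sketch, assuming the diffImage and the 30.0f threshold from the code above:

    // Sketch: vectorized version of the loop above (same mask, no explicit loops).
    cv::Mat diffF;
    diffImage.convertTo(diffF, CV_32FC3);      // work in float to avoid 8-bit overflow

    std::vector<cv::Mat> channels;
    cv::split(diffF.mul(diffF), channels);     // per-channel squared differences

    cv::Mat dist;
    cv::sqrt(channels[0] + channels[1] + channels[2], dist);  // per-pixel Euclidean distance

    cv::Mat foregroundMask = dist > 30.0f;     // 8-bit mask: 255 where dist > threshold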

In general, it is hard to compute a complete foreground/background segmentation from pixel-wise difference interpretations alone.

You will probably have to add some post-processing, starting from your foreground mask, to get a real segmentation. I am not sure whether there are any stable universal solutions for that yet.
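A minimal sketch of one possible post-processing step, starting from foregroundMask (the kernel size and minimum blob area are placeholder values that need tuning):

    // Sketch: close small holes, remove speckles, then keep only large connected blobs.
    cv::Mat cleaned;
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5,5));
    cv::morphologyEx(foregroundMask, cleaned, cv::MORPH_CLOSE, kernel); // fill small gaps
    cv::morphologyEx(cleaned, cleaned, cv::MORPH_OPEN, kernel);         // remove speckles

    std::vector<std::vector<cv::Point> > contours;
    cv::Mat contourInput = cleaned.clone();  // findContours may modify its input in older OpenCV
    cv::findContours(contourInput, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    cv::Mat segmentation = cv::Mat::zeros(cleaned.size(), CV_8UC1);
    for (size_t k = 0; k < contours.size(); ++k)
    {
        if (cv::contourArea(contours[k]) > 500.0)  // minimum blob area, needs tuning
            cv::drawContours(segmentation, contours, (int)k, cv::Scalar(255), -1); // -1 = filled
    }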

As berak mentioned, in practice it won't be enough to use a single background image, so you will have to compute and maintain your background image over time. There are plenty of papers covering this topic and, as far as I know, no stable universal solution yet.
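If you don't want to maintain the background model yourself, OpenCV ships adaptive background subtractors; a minimal sketch using the OpenCV 3.x createBackgroundSubtractorMOG2 API (the construction call differs in 2.4, and capture is an assumed cv::VideoCapture):

    // Sketch (OpenCV 3.x API): an adaptive background model instead of a single snapshot.
    cv::Ptr<cv::BackgroundSubtractorMOG2> subtractor =
        cv::createBackgroundSubtractorMOG2(500, 16.0, true); // history, varThreshold, detectShadows

    cv::Mat frame, fgMask;
    while (capture.read(frame))               // 'capture' is an assumed cv::VideoCapture
    {
        subtractor->apply(frame, fgMask);     // updates the model and yields the foreground mask
        // shadows are marked with 127 when detectShadows is true; keep only real foreground (255)
        cv::threshold(fgMask, fgMask, 200, 255, cv::THRESH_BINARY);
    }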

Here are some more tests. I converted to HSV color space:

    cv::cvtColor(backgroundImage, HSV_backgroundImage, CV_BGR2HSV);
    cv::cvtColor(currentImage, HSV_currentImage, CV_BGR2HSV);

and performed the same operations in this space, leading to this result:

[image]

after adding some noise to the input:

[image]

I get this result:

[image]

So maybe the threshold is a bit too high. I still encourage you to have a look at the HSV color space too, but you might have to reinterpret the "difference image" and rescale each channel to combine their difference values.
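For illustration, a minimal sketch of such a reinterpretation (the per-channel weights and the threshold are placeholder values that need tuning for your scene; it also treats the hue difference as circular):

    // Sketch: weight the HSV channel differences before combining them into one mask.
    cv::Mat diffHSV;
    cv::absdiff(HSV_backgroundImage, HSV_currentImage, diffHSV);

    std::vector<cv::Mat> ch;
    cv::split(diffHSV, ch);

    cv::Mat hueWrap = 180 - ch[0];         // hue is circular (0..179 for 8-bit images)
    cv::Mat hueDiff;
    cv::min(ch[0], hueWrap, hueDiff);      // true (wrapped) hue distance

    cv::Mat h, s, v;
    hueDiff.convertTo(h, CV_32F);
    ch[1].convertTo(s, CV_32F);
    ch[2].convertTo(v, CV_32F);

    cv::Mat combined = 2.0f * h + 1.0f * s + 0.5f * v;  // placeholder weights
    cv::Mat foregroundMaskHSV = combined > 60.0f;       // placeholder threshold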