I am searching for an algorithm to measure the similarity between two images. I found SSIM, along with code like this:
function [mssim, ssim_map] = SSIM(img1, img2, K, window, L)
% Structural Similarity (SSIM) index between two grayscale images.
if ~isequal(size(img1), size(img2))
    mssim = -Inf;      % the function returns mssim, not ssim_index
    ssim_map = -Inf;
    return;
end
[M, N] = size(img1);
if (nargin == 2)
    if ((M < 11) || (N < 11))   % image too small for the 11x11 window
        mssim = -Inf;
        ssim_map = -Inf;
        return;
    end
    window = fspecial('gaussian', 11, 1.5);   % default settings
    K(1) = 0.01;
    K(2) = 0.03;
    L = 255;                                  % dynamic range of 8-bit images
end
C1 = (K(1)*L)^2;
C2 = (K(2)*L)^2;
window = window/sum(sum(window));             % normalize the window
img1 = double(img1);
img2 = double(img2);
% Local means, variances and covariance ('valid' avoids border effects)
mu1 = filter2(window, img1, 'valid');
mu2 = filter2(window, img2, 'valid');
mu1_sq = mu1.*mu1;
mu2_sq = mu2.*mu2;
mu1_mu2 = mu1.*mu2;
sigma1_sq = filter2(window, img1.*img1, 'valid') - mu1_sq;
sigma2_sq = filter2(window, img2.*img2, 'valid') - mu2_sq;
sigma12 = filter2(window, img1.*img2, 'valid') - mu1_mu2;
if (C1 > 0 && C2 > 0)
    ssim_map = ((2*mu1_mu2 + C1).*(2*sigma12 + C2))./((mu1_sq + mu2_sq + C1).*(sigma1_sq + sigma2_sq + C2));
else
    % With zero stabilizing constants, guard against division by zero
    numerator1 = 2*mu1_mu2 + C1;
    numerator2 = 2*sigma12 + C2;
    denominator1 = mu1_sq + mu2_sq + C1;
    denominator2 = sigma1_sq + sigma2_sq + C2;
    ssim_map = ones(size(mu1));
    index = (denominator1.*denominator2 > 0);
    ssim_map(index) = (numerator1(index).*numerator2(index))./(denominator1(index).*denominator2(index));
    index = (denominator1 ~= 0) & (denominator2 == 0);
    ssim_map(index) = numerator1(index)./denominator1(index);
end
mssim = mean2(ssim_map);
end
% Usage:
img1 = imread(image1);
img2 = imread(image2);
[mssim, ssim_map] = SSIM(img1, img2);
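For readers following along outside MATLAB, here is a minimal NumPy sketch of the same local statistics. It uses a uniform window instead of the Gaussian one and 'valid' borders; the function name `ssim_uniform` and the window size are my own choices for illustration, not part of the code above:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def ssim_uniform(img1, img2, win=8, L=255, K1=0.01, K2=0.03):
    """SSIM with a uniform win x win window, 'valid' borders (illustrative)."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    # All win x win patches; shape (H-win+1, W-win+1, win, win)
    p1 = sliding_window_view(img1.astype(np.float64), (win, win))
    p2 = sliding_window_view(img2.astype(np.float64), (win, win))
    # Local means, variances and covariance, as in the MATLAB code
    mu1, mu2 = p1.mean(axis=(-2, -1)), p2.mean(axis=(-2, -1))
    s1 = (p1 * p1).mean(axis=(-2, -1)) - mu1 ** 2
    s2 = (p2 * p2).mean(axis=(-2, -1)) - mu2 ** 2
    s12 = (p1 * p2).mean(axis=(-2, -1)) - mu1 * mu2
    ssim_map = ((2 * mu1 * mu2 + C1) * (2 * s12 + C2)) / \
               ((mu1 ** 2 + mu2 ** 2 + C1) * (s1 + s2 + C2))
    return ssim_map.mean(), ssim_map
```

For identical inputs every entry of the map is 1, so the mean SSIM is 1; any distortion pulls it below 1.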
I can get values from this code, but does this method work when one image is rotated? If I rotate one picture by some angle, will SSIM still detect that the rotated image and the original have the same shape?
Thanks very much for your help!
SSIM is not rotation invariant. That is, if ImgA is a rotated version of ImgB, then SSIM(ImgA, ImgB) is not likely to be high.
So, if you want to detect the relative rotation angle between ImgA and ImgB, you would have to rotate ImgA through all candidate angles and compare each rotated version to ImgB.
This brute-force search is not very efficient, and you might find other methods that are better suited to detecting rotation.
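The brute-force search above can be sketched as follows (Python/NumPy for illustration; `global_ssim` and `best_rotation` are names I made up, `global_ssim` uses whole-image statistics rather than a sliding window, and only 90-degree steps are tried so that `np.rot90` avoids interpolation artifacts; for arbitrary angles you would substitute something like `scipy.ndimage.rotate`):

```python
import numpy as np

def global_ssim(a, b, L=255, K1=0.01, K2=0.03):
    """Single-window SSIM over the whole image (simplified, for the search)."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mu1, mu2 = a.mean(), b.mean()
    s1, s2 = a.var(), b.var()
    s12 = ((a - mu1) * (b - mu2)).mean()   # covariance
    return ((2 * mu1 * mu2 + C1) * (2 * s12 + C2)) / \
           ((mu1 ** 2 + mu2 ** 2 + C1) * (s1 + s2 + C2))

def best_rotation(img_a, img_b):
    """Rotate img_a through 0/90/180/270 degrees, keep the best SSIM match."""
    scores = {k * 90: global_ssim(np.rot90(img_a, k), img_b) for k in range(4)}
    angle = max(scores, key=scores.get)
    return angle, scores[angle]
```

If `img_b` really is a rotated copy of `img_a`, the correct angle scores near 1 while the wrong angles score low, which is exactly why the exhaustive search works but scales badly once you need fine angular steps.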
If I recall correctly, you are dealing mostly with binary masks of closed curves. I believe a better choice for rotation detection in your case would be a rotation-invariant version of shape-context descriptors, combined with a robust rigid-transformation estimation method (such as RANSAC).
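Shape contexts plus RANSAC take a fair amount of code. As a much simpler (and less robust) illustration of rotation estimation for binary masks, one can compare principal-axis orientations computed from second-order central moments; this is a cruder technique than the one recommended above, and the function below is only a sketch:

```python
import numpy as np

def mask_orientation(mask):
    """Principal-axis orientation of a binary mask, in radians.

    Computed from second-order central moments; note that image rows
    grow downward, so the sign convention differs from math axes.
    """
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
```

The relative rotation between two masks is then roughly the difference of their orientations, but only modulo 180 degrees, and it degenerates for nearly symmetric shapes, which is where the descriptor-plus-RANSAC approach earns its keep.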