I am currently trying to obtain the alpha value of a pixel in a UIImageView. I have obtained the CGImage from [UIImageView image] and created an RGBA byte array from it. Alpha is premultiplied.
CGImageRef image = uiImage.CGImage;
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4); // must be freed once we're done with it
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(
    rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace,
    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big
);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);
I then calculate the array index of the alpha byte for the pixel at the given coordinates from the UIImageView.
NSUInteger x = (NSUInteger)uiViewPoint.x; // truncate to whole pixels
NSUInteger y = (NSUInteger)uiViewPoint.y;
NSUInteger byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
unsigned char alpha = rawData[byteIndex + 3];
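For example, in a 100-pixel-wide image (so bytesPerRow = 400), the pixel at (20, 10) starts at byte 10 * 400 + 20 * 4 = 4080, and its alpha byte sits at index 4083.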
However, I don't get the values I expect. For a completely black, transparent area of the image I get non-zero values for the alpha channel. Do I need to translate the coordinates between UIKit and Core Graphics, i.e. is the y-axis inverted? Or have I misunderstood premultiplied alpha values?
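(My understanding of premultiplied alpha: each stored color component has already been multiplied by the alpha value, so a 50%-opaque pure-red pixel would be stored as roughly (128, 0, 0, 128) rather than (255, 0, 0, 128). Please correct me if that's wrong.)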
Update:
@Nikolai Ruhe's suggestion was key to this. I did not in fact need to translate between UIKit coordinates and Core Graphics coordinates. However, after setting the blend mode, my alpha values were what I expected:
CGContextSetBlendMode(context, kCGBlendModeCopy);
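To be explicit about placement, the call goes between creating the context and drawing. A sketch of my setup (presumably this matters because the malloc'd buffer starts out with garbage that normal blending would composite the image over):
CGContextRef context = CGBitmapContextCreate(
    rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace,
    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big
);
// replace destination pixels outright instead of blending
// the image over the buffer's existing (uninitialized) contents
CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);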
If all you want is the alpha value of a single point, all you need is an alpha-only single-point buffer. I believe this should suffice:
// assume im is a UIImage, point is the CGPoint to test
CGImageRef cgim = im.CGImage;
unsigned char pixel[1] = {0};
// 1x1, 8-bit, alpha-only bitmap context backed by our single byte
CGContextRef context = CGBitmapContextCreate(pixel,
                                             1, 1, 8, 1, NULL,
                                             (CGBitmapInfo)kCGImageAlphaOnly);
// offset the image so the point of interest lands on the context's one pixel
CGContextDrawImage(context, CGRectMake(-point.x,
                                       -point.y,
                                       CGImageGetWidth(cgim),
                                       CGImageGetHeight(cgim)),
                   cgim);
CGContextRelease(context);
CGFloat alpha = pixel[0] / 255.0;
BOOL transparent = alpha < 0.01;
If the UIImage doesn't have to be recreated every time, this is very efficient.
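If you need the test in more than one place, it's easy to wrap in a small function. A sketch (the name pointIsTransparent is mine, and point is in image pixel coordinates):
#import <UIKit/UIKit.h>

// Hypothetical helper: YES if the image's alpha at `point` is effectively zero.
static BOOL pointIsTransparent(UIImage *im, CGPoint point) {
    CGImageRef cgim = im.CGImage;
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1, NULL,
                                                 (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextDrawImage(context,
                       CGRectMake(-point.x, -point.y,
                                  CGImageGetWidth(cgim),
                                  CGImageGetHeight(cgim)),
                       cgim);
    CGContextRelease(context);
    return pixel[0] / 255.0 < 0.01;
}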
EDIT December 8 2011:
A commenter points out that under certain circumstances the image may be flipped. I've been thinking about this, and I'm a little sorry that I didn't write the code using the UIImage directly, like this (I think the reason is that at the time I didn't understand about UIGraphicsPushContext):
// assume im is a UIImage, point is the CGPoint to test
unsigned char pixel[1] = {0};
CGContextRef context = CGBitmapContextCreate(pixel,
                                             1, 1, 8, 1, NULL,
                                             (CGBitmapInfo)kCGImageAlphaOnly);
// route UIKit drawing into our bitmap context so UIImage
// can compensate for orientation and flipping itself
UIGraphicsPushContext(context);
[im drawAtPoint:CGPointMake(-point.x, -point.y)];
UIGraphicsPopContext();
CGContextRelease(context);
CGFloat alpha = pixel[0] / 255.0;
BOOL transparent = alpha < 0.01;
I think that would have solved the flipping issue, since UIImage's drawing methods respect the image's orientation and UIKit's top-left coordinate system, whereas CGContextDrawImage does not.