How would you detect touches only on non-transparent pixels of a UIImageView, efficiently?
Consider an image like the one below, displayed with a UIImageView. The goal is to make the gesture recognisers respond only when the touch happens in the non-transparent (black, in this case) area of the image.
One idea would be to override hitTest:withEvent: or pointInside:withEvent:, although this approach might be terribly inefficient, as these methods get called many times during a touch event.

Here's my quick implementation, based on Retrieving a pixel alpha value for a UIImage:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // Using code from https://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
    // Render just the single pixel under the touch into a 1x1, alpha-only bitmap.
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel,
                                                 1, 1, 8, 1, NULL,
                                                 (CGBitmapInfo)kCGImageAlphaOnly);
    UIGraphicsPushContext(context);
    // Shift the image so the touched point lands at (0, 0) of the 1x1 context.
    [self.image drawAtPoint:CGPointMake(-point.x, -point.y)];
    UIGraphicsPopContext();
    CGContextRelease(context);

    // Treat anything below ~1% opacity as transparent.
    CGFloat alpha = pixel[0] / 255.0f;
    BOOL transparent = alpha < 0.01f;
    return !transparent;
}
This assumes that the image is in the same coordinate space as the point. If scaling goes on, you may have to convert the point before checking the pixel data.
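For instance, here is a rough sketch of that conversion for the simple case where the view stretches the image to fill its bounds (UIViewContentModeScaleToFill). The helper name is just illustrative, and aspect-fit/fill content modes would need additional offset math:

- (CGPoint)imagePointForViewPoint:(CGPoint)point {
    // Hypothetical helper: scale the touch point from view coordinates to
    // image coordinates, assuming the image is stretched to fill the bounds.
    CGSize imageSize = self.image.size;
    CGSize viewSize = self.bounds.size;
    return CGPointMake(point.x * imageSize.width / viewSize.width,
                       point.y * imageSize.height / viewSize.height);
}

You would then pass the converted point to drawAtPoint: instead of the raw touch location.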
It appears to work pretty quickly for me; I measured approx. 0.1-0.4 ms per call. It doesn't handle the interior space, and is probably not optimal.
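To tie this back to the original goal, here is a minimal usage sketch under the assumption that the override above lives in a UIImageView subclass (AlphaHitImageView, "shape", and handleTap: are illustrative placeholders). Because UIKit hit-tests the view before delivering touches to its recognizers, any gesture recognizer attached to the view will then only fire over opaque pixels:

// Hypothetical setup: the subclass contains the pointInside:withEvent: override.
AlphaHitImageView *imageView = [[AlphaHitImageView alloc] initWithImage:[UIImage imageNamed:@"shape"]];
imageView.userInteractionEnabled = YES;  // UIImageView disables user interaction by default
UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc]
    initWithTarget:self action:@selector(handleTap:)];
[imageView addGestureRecognizer:tap];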