I wish I could find an answer for this. I have searched and searched and couldn't find the right answer. Here is my situation:
In a Mac OS Cocoa application, I want to draw a pixel (actually a few pixels) onto a dedicated area of my application window. I figured it would be nicer to place an NSImageView there (I did so in IB and connected the outlet to my app delegate) and draw on that instead of on my NSWindow.
How in the world can I do that? Mac OS seems to offer NSBezierPath as the most basic drawing tool. Is that really true? This is completely shocking to me: I come from a long history of Windows programming, where drawing a pixel onto a canvas is typically the simplest thing you can do.
I do not want to use OpenGL and I am not sure to what extent Quartz is involved in this.
All I want is some help on how I can pull off this pseudocode in real Objective-C/Cocoa:
imageObj.drawPixel(10,10,blackColor);
I would love to hear your answers on this and I am sure this will help a lot of people starting with Cocoa.
Thanks!
What you are asking for is either of these two methods:
NSBitmapImageRep setColor:atX:y: Changes the color of the pixel at the specified coordinates.
NSBitmapImageRep setPixel:atX:y: Sets the receiver's pixel at the specified coordinates to the specified raw pixel values.
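On the Mac, that maps almost directly onto your pseudocode. Here is a minimal sketch, meant to live in your app delegate (or any class that imports Cocoa.h), assuming an NSImageView outlet named imageView and an arbitrary 100x100 pixel image; both of those are placeholders you would replace:

// Build an empty 8-bit RGBA bitmap; the 100x100 size is just an example
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:100
                  pixelsHigh:100
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];

// The rough equivalent of imageObj.drawPixel(10, 10, blackColor)
[rep setColor:[NSColor blackColor] atX:10 y:10];

// Wrap the bitmap in an NSImage and hand it to the image view
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(100, 100)];
[image addRepresentation:rep];
[self.imageView setImage:image];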
Note that these aren't available on iOS. On iOS, it appears that the way to do this is to create a raw buffer of pixel data for a given colorspace (likely RGB), fill that with color data (write a little setPixel method to do this) and then call CGImageCreate() like so:
// Create a raw buffer to hold pixel data, which we will fill algorithmically
NSInteger width = theWidthYouWant;
NSInteger height = theHeightYouWant;
NSInteger dataLength = width * height * 4;
UInt8 *data = (UInt8 *)malloc(dataLength * sizeof(UInt8));

// Fill the pixel buffer with color data
for (int j = 0; j < height; j++) {
    for (int i = 0; i < width; i++) {
        // Here I'm just filling every pixel with red
        float red   = 1.0f;
        float green = 0.0f;
        float blue  = 0.0f;
        float alpha = 1.0f;

        int index = 4 * (i + j * width);
        data[index]     = 255 * red;
        data[index + 1] = 255 * green;
        data[index + 2] = 255 * blue;
        data[index + 3] = 255 * alpha;
    }
}

// Create a CGImage with the pixel data
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                 kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                 provider, NULL, true, kCGRenderingIntentDefault);

// Clean up
CGColorSpaceRelease(colorspace);
CGDataProviderRelease(provider);
// Don't forget to free(data) when you are done with the CGImage
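If you would rather not keep track of when it is safe to free the buffer, CGDataProviderCreateWithData also accepts a release callback that Core Graphics invokes once the provider no longer needs the bytes. A small sketch of that variant (the callback name is mine):

// Hypothetical callback; Core Graphics calls this when the provider is done with the bytes
static void ReleasePixelData(void *info, const void *data, size_t size) {
    free((void *)data);
}

// ...then pass it instead of NULL when creating the provider
CGDataProviderRef provider =
    CGDataProviderCreateWithData(NULL, data, dataLength, ReleasePixelData);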
Lastly, you might want to manipulate pixels in an image you've already loaded into a CGImage. There is sample code for doing that in Apple's Technical Q&A QA1509, "Getting the pixel data from a CGImage object".
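The core of that technique is to copy the raw bytes out of the image's data provider. A rough sketch, assuming an 8-bit RGBA image like the one built above (the variable names are mine):

// Copy out the raw pixel bytes of an existing CGImage
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);

// Locate one pixel; assumes 4 bytes per pixel (RGBA)
size_t bytesPerRow = CGImageGetBytesPerRow(image);
size_t x = 10, y = 10;
const UInt8 *pixel = bytes + y * bytesPerRow + x * 4;
// pixel[0], pixel[1], pixel[2], pixel[3] are the red, green, blue, and alpha components

CFRelease(pixelData);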