Objective-C / Cocoa: I need to load an image from a JPG file into a two-dimensional array so that I can access each pixel. I am trying (unsuccessfully) to load the image into an NSBitmapImageRep. I have tried several variations on the following two lines of code:

NSString *filePath = [NSString stringWithFormat: @"%@%@",@"/Users/adam/Documents/phoneimages/", [outLabel stringValue]];  //this coming from a window control
NSImageRep *controlBitmap = [[NSImageRep alloc] imageRepWithContentsOfFile:filePath];

With the code shown, I get a runtime error: -[NSImageRep imageRepWithContentsOfFile:]: unrecognized selector sent to instance 0x100147070.

I have tried replacing the second line of code with:

NSImage *controlImage = [[NSImage alloc] initWithContentsOfFile:filePath];
NSBitmapImageRep *controlBitmap = [[NSBitmapImageRep alloc] initWithData:controlImage];

But this yields an 'incompatible type' compiler error: initWithData: expects an NSData argument, not an NSImage.

I have also tried various other ways to get this done, but all fail with either a compiler or a runtime error. Can someone help me with this? I will eventually need to load some PNG files in the same way (so it would be nice to have a consistent technique for both).

And if you know of an easier / simpler way to accomplish what I am trying to do (i.e., get the images into a two-dimensional array), rather than using NSBitmapImageRep, then please let me know! And by the way, I know the path is valid (confirmed with fileExistsAtPath) -- and the filename in outLabel is a file with .jpg extension.

Thanks for any help!

A: 

Edit:

Carl has a better answer. This is only good if you also want to manipulate the image in some way, like scaling or changing color mode.

Original:

I would probably use Core Graphics for this. NSImage and most NSImageRep subclasses were not designed to keep a pixel array sitting around; they convert source image data into pixels only when drawn.

When you create a CGBitmapContext, you can pass it a buffer of pixels to use. That is your two-dimensional array, with whatever row bytes, color depth, pixel format, and other properties you specify.

You can initialize a CGImage with JPG or PNG data using CGImageCreateWithJPEGDataProvider or CGImageCreateWithPNGDataProvider, respectively.

Once you draw the image into the context with CGContextDrawImage, the buffer you passed to CGBitmapContextCreate is filled with the image's pixel data.

  1. Create a CGDataProvider with the image data.
  2. Create a CGImageRef with the data provider.
  3. Create a buffer large enough for the image pixels.
  4. Create a CGBitmapContext that is the size of the image with the pixel buffer.
  5. Draw the image into the bitmap context.
  6. Access the pixels in the buffer.
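The steps above can be sketched as follows. This is only a sketch, assuming an 8-bit RGBA buffer and that filePath holds the JPG path from the question; adjust the pixel format to whatever you actually need:

```objc
#import <Cocoa/Cocoa.h>

NSData *jpegData = [NSData dataWithContentsOfFile:filePath];
CGDataProviderRef provider =
    CGDataProviderCreateWithCFData((CFDataRef)jpegData);           // step 1
CGImageRef image =
    CGImageCreateWithJPEGDataProvider(provider, NULL, true,
                                      kCGRenderingIntentDefault);  // step 2
CGDataProviderRelease(provider);

size_t width  = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
size_t bytesPerRow = width * 4;                       // 4 bytes per pixel (RGBA)
unsigned char *pixels = malloc(height * bytesPerRow); // step 3

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context =
    CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                          colorSpace, kCGImageAlphaPremultipliedLast); // step 4
CGContextDrawImage(context,
                   CGRectMake(0, 0, width, height), image);            // step 5

// Step 6: pixel (x, y) starts at pixels[y * bytesPerRow + x * 4].
// Note that in Core Graphics coordinates row 0 is the bottom of the image.

CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CGImageRelease(image);
free(pixels);
```

For your PNG files, swap in CGImageCreateWithPNGDataProvider at step 2; everything else stays the same.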

If you want to use NSImage instead of CGImage, you can create an NSGraphicsContext from a CGContext with graphicsContextWithGraphicsPort:flipped: and set it as the current context. That basically replaces steps 1 and 2 above with whatever code you want to use to make an NSImage.
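For instance, given a CGBitmapContext like the one from step 4 above, a sketch of that approach might look like:

```objc
#import <Cocoa/Cocoa.h>

// Assuming 'context' is a CGBitmapContextRef backed by your pixel buffer.
NSGraphicsContext *nsContext =
    [NSGraphicsContext graphicsContextWithGraphicsPort:context flipped:NO];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:nsContext];

NSImage *image = [[NSImage alloc] initWithContentsOfFile:filePath];
[image drawAtPoint:NSZeroPoint
          fromRect:NSZeroRect      // NSZeroRect means "draw the whole image"
         operation:NSCompositeCopy
          fraction:1.0];
[image release];

[NSGraphicsContext restoreGraphicsState];
// The buffer behind 'context' now contains the drawn pixels.
```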

drawnonward
+1  A: 

Easy!

NSImage *controlImage = [[NSImage alloc] initWithContentsOfFile:filePath];
NSBitmapImageRep *imageRep = [[controlImage representations] objectAtIndex:0];

Then to get the actual bitmap pixel data:

unsigned char *pixelData = [imageRep bitmapData];

If your image has multiple representations (it probably doesn't), you can get them out of that same array. The same code will work for your .png images.
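To index into that buffer as a two-dimensional array, use the rep's own geometry rather than assuming a layout. A sketch (the coordinates are hypothetical; check hasAlpha and bitsPerPixel for your actual files):

```objc
NSInteger bytesPerRow     = [imageRep bytesPerRow];
NSInteger samplesPerPixel = [imageRep samplesPerPixel]; // e.g. 3 for RGB JPGs, 4 for RGBA PNGs

NSInteger x = 10, y = 20;  // hypothetical pixel coordinates
unsigned char *p = pixelData + y * bytesPerRow + x * samplesPerPixel;
unsigned char red   = p[0];
unsigned char green = p[1];
unsigned char blue  = p[2];
```

Using bytesPerRow matters because rows are often padded past width * samplesPerPixel.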

Carl Norum
Thanks so much! That does get the data into an array for me to work on.
Adam