views: 25
answers: 1

Hi, right now I'm working on an application that accepts a CGImageRef, processes the image's pixels, and returns the processed CGImageRef. To check how it works, I wrote code that simply passes the CGImageRef through and expects the same image to be returned, unprocessed. The problem is that I'm not getting back the exact same image; the resulting image's colors are completely changed. If this is not the right way to do this, then please suggest a better way. Here is the code:

    - (CGImageRef)invertImageColor:(CGImageRef)imageRef {
        CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
        UInt8 *m_PixelBuf = (UInt8 *)CFDataGetBytePtr(dataRef);

        // My pixel-editing code goes here, but it is omitted for testing;
        // I will add it back once this issue is solved.

        CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf,
                                                 CGImageGetWidth(imageRef),
                                                 CGImageGetHeight(imageRef),
                                                 CGImageGetBitsPerComponent(imageRef),
                                                 CGImageGetBytesPerRow(imageRef),
                                                 CGImageGetColorSpace(imageRef),
                                                 kCGImageAlphaPremultipliedFirst);

        CGImageRef newImageRef = CGBitmapContextCreateImage(ctx);
        CGContextRelease(ctx);

        return newImageRef;
    }
+1  A: 

You're assuming that the input image has its alpha premultiplied and stored before the color components. Don't assume that. Get the image's bitmap info and pass that to CGBitmapContextCreate.
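For illustration, a minimal sketch of that change, keeping the question's structure but taking the bitmap info from the source image; the function name is just illustrative, and it assumes the source image's pixel format is one that CGBitmapContextCreate accepts:

    #import <CoreGraphics/CoreGraphics.h>

    // Sketch: same structure as the method in the question, but the final
    // argument comes from the source image instead of being hard-coded.
    static CGImageRef CreateCopyWithMatchingFormat(CGImageRef imageRef) {
        CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
        UInt8 *pixelBuf = (UInt8 *)CFDataGetBytePtr(dataRef);

        CGContextRef ctx = CGBitmapContextCreate(pixelBuf,
                                                 CGImageGetWidth(imageRef),
                                                 CGImageGetHeight(imageRef),
                                                 CGImageGetBitsPerComponent(imageRef),
                                                 CGImageGetBytesPerRow(imageRef),
                                                 CGImageGetColorSpace(imageRef),
                                                 CGImageGetBitmapInfo(imageRef)); // the image's own bitmap info

        CGImageRef newImageRef = CGBitmapContextCreateImage(ctx);
        CGContextRelease(ctx);
        CFRelease(dataRef); // the copied pixel data must be released (the question's code leaks it)
        return newImageRef;
    }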

Note that CGBitmapContext doesn't work with all possible pixel formats. If your input image is in a pixel format that CGBitmapContext doesn't like, you're just going to need to use a separate buffer and draw the input image into the context.
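In that case, a sketch of the fallback might look like the following; the 32-bit premultiplied RGBA working format and the function name are illustrative choices, not something prescribed above:

    #import <CoreGraphics/CoreGraphics.h>

    // Sketch: let Core Graphics own the buffer (data == NULL) in a format it
    // definitely supports -- 32-bit premultiplied RGBA is assumed here -- and
    // draw the source image into it, which converts the pixels on the way in.
    static CGImageRef CreateCopyByRedrawing(CGImageRef imageRef) {
        size_t width  = CGImageGetWidth(imageRef);
        size_t height = CGImageGetHeight(imageRef);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL,       // CG allocates and owns the pixels
                                                 width, height,
                                                 8,           // bits per component
                                                 width * 4,   // bytes per row
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);

        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), imageRef);

        // Pixel processing would read/write the context's buffer here:
        // UInt8 *pixels = (UInt8 *)CGBitmapContextGetData(ctx);

        CGImageRef newImageRef = CGBitmapContextCreateImage(ctx);
        CGContextRelease(ctx);
        return newImageRef;
    }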

Peter Hosey