I am writing an iPhone application and need to essentially implement something equivalent to the 'eyedropper' tool in photoshop, where you can touch a point on the image and capture the RGB values for the pixel in question to determine and match its color. Getting the UIImage is the easy part, but is there a way to convert the UIImage data into a bitmap representation in which I could extract this information for a given pixel? A working code sample would be most appreciated, and note that I am not concerned with the alpha value.
I don't like Apple's way of doing it, but I would guess no one will answer, since they have signed an NDA in which they agree not to discuss their programming with anyone.
You can't access the bitmap data of a UIImage directly.
You need to get the CGImage representation of the UIImage. Then get the CGImage's data provider, from that a CFData representation of the bitmap. Make sure to release the CFData when done.
CGImageRef cgImage = [image CGImage];
CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
CFDataRef bitmapData = CGDataProviderCopyData(provider);
// ... read the pixel bytes ...
CFRelease(bitmapData);
You will probably want to look at the bitmap info of the CGImage to get pixel order, image dimensions, etc.
To do something similar in my application, I created a small off-screen bitmap context (via CGBitmapContextCreate) and rendered the UIImage into it. This gave me a fast way to extract a number of pixels at once. It means you can set up the target bitmap in a format you find easy to parse, and let CoreGraphics do the hard work of converting between color models or bitmap formats.
Lajos's answer worked for me. To get the pixel data as an array of bytes, I did this:
const UInt8* data = CFDataGetBytePtr(bitmapData);
More info: CFDataRef documentation.
Also, remember to include CoreGraphics.framework
Hello, I am also trying to accomplish this.
My problem is reading the pixel data correctly once I have the pointer.
I have an image that takes up the whole iPhone screen, with no alpha channel, and I want to find the color of a pixel (24 bits per pixel, 8 bits per component, 960 bytes per row).
I have the pointer to the data:
const UInt8 *data = CFDataGetBytePtr(bitmapData);
but now I am not sure how to index into the data correctly given an (X, Y) coordinate. Any help on this would be appreciated.
-jeff
I don't know how to index into image data correctly based on a given (X, Y) coordinate. Does anyone know?
A little more detail...
I posted earlier this evening with a consolidation of (and a small addition to) what had been said on this page; that version can be found at the bottom of this post. I am editing the post now, however, to propose what is, at least for my requirements (which include modifying pixel data), a better method, since it provides writable data. As I understand it, the method in the previous posts (and at the bottom of this post) provides only a read-only reference to the data.
Writable Pixel Information - method 1:
Step 1. I defined constants
#define RGBA 4
#define RGBA_8_BIT 8
Step 2. In my UIImage subclass I declared instance variables:
size_t bytesPerRow;
size_t byteCount;
size_t pixelCount;
CGContextRef context;
CGColorSpaceRef colorSpace;
UInt8* pixelByteData;
// A pointer to an array of RGBA pixels in memory
RGBAPixel* pixelData;
Step 3. The pixel struct (with alpha in this version; byte is a typedef for unsigned char, as in method 2 below)
typedef struct RGBAPixel {
byte red;
byte green;
byte blue;
byte alpha;
} RGBAPixel;
Step 4. Bitmap function (returns premultiplied RGBA; divide each RGB component by A to recover the unmodified RGB):
- (RGBAPixel*) bitmap {
NSLog( @"Returning bitmap representation of UIImage." );
// 8 bits each of red, green, blue, and alpha.
[self setBytesPerRow: self.size.width * RGBA];
[self setByteCount: bytesPerRow * self.size.height];
[self setPixelCount: self.size.width * self.size.height];
// Create RGB color space
[self setColorSpace: CGColorSpaceCreateDeviceRGB()];
if ( ! colorSpace ) {
NSLog( @"Error allocating color space." );
return nil;
}
[self setPixelData: malloc( byteCount )];
if ( ! pixelData ) {
NSLog( @"Error allocating bitmap memory. Releasing color space." );
CGColorSpaceRelease( colorSpace );
return nil;
}
// Create the bitmap context.
// Pre-multiplied RGBA, 8-bits per component.
// The source image format will be converted to the format specified here by CGBitmapContextCreate.
[self setContext: CGBitmapContextCreate( (void*) pixelData,
self.size.width,
self.size.height,
RGBA_8_BIT,
bytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast )];
// Make sure we have our context
if ( ! context ) {
free( pixelData );
NSLog( @"Context not created!" );
return nil;
}
// Draw the image to the bitmap context.
// The memory allocated for the context for rendering will then contain the raw image pixelData in the specified color space.
CGRect rect = { { 0 , 0 }, { self.size.width, self.size.height } };
CGContextDrawImage( context, rect, self.CGImage );
// Now we can get a pointer to the image pixelData associated with the bitmap context.
pixelData = (RGBAPixel*) CGBitmapContextGetData( context );
return pixelData;
}
Read-Only Data (Previous information) - method 2:
Step 1. I declared a type for byte:
typedef unsigned char byte;
Step 2. I declared a struct to correspond to a pixel:
typedef struct RGBPixel {
byte red;
byte green;
byte blue;
} RGBPixel;
Step 3. I subclassed UIImageView and declared (with corresponding synthesized properties):
// Reference to Quartz CGImage for receiver (self)
CFDataRef bitmapData;
// Buffer holding raw pixel data copied from Quartz CGImage held in receiver (self)
UInt8* pixelByteData;
// A pointer to the first pixel element in an array
RGBPixel* pixelData;
Step 4. Subclass code I put in a method named bitmap (to return the bitmap pixel data):
// Get the bitmap data from the receiver's image (see CGDataProvider docs)
[self setBitmapData: CGDataProviderCopyData( CGImageGetDataProvider( self.image.CGImage ) )];
// Create a buffer to store the bitmap data (uninitialized memory as long as the data)
[self setPixelByteData: malloc( CFDataGetLength( bitmapData ) )];
// Copy image data into allocated buffer
CFDataGetBytes( bitmapData, CFRangeMake( 0, CFDataGetLength( bitmapData ) ), pixelByteData );
// Cast a pointer to the first element of pixelByteData
// Essentially we are making a second pointer that divides the byte data differently: instead of treating each unit as 1 byte, we treat each unit as 3 bytes (1 pixel).
pixelData = (RGBPixel*) pixelByteData;
// Now you can access pixels by index: pixelData[ index ]
NSLog( @"Pixel data one red (%i), green (%i), blue (%i).", pixelData[0].red, pixelData[0].green, pixelData[0].blue );
// You can determine the desired index as row * width + column.
return pixelData;
Step 5. I made an accessor method:
- (RGBPixel*) pixelDataForRow: (int) row
column: (int) column {
// Return a pointer to the pixel data
return &pixelData[ row * column ];
}
For those who couldn't get the above to work (like me), there is this useful post: http://www.markj.net/iphone-uiimage-pixel-color/
You can see the whole implementation there.