Imagine this: I have a PNG image that shows a teddy bear, or a triangle. Something non-rectangular. Now I want to find out at which point, for a given row and direction (coordinates relative to the UIImageView's coordinate system), the actual visible image starts.

Example: Let's say I need to know where the teddy bear's foot begins from the left, looking at the last row. That's certainly not just frame.origin.x, because the foot doesn't fill the whole rectangle. It may begin somewhere around x=12.

I would iterate somehow over the image data and ask each pixel: "Hey, are you transparent?". If it is, I move on to the next one: "Hey, what's up with you? Transparent?". Until I get the answer: "Nope! I'm totally opaque!". Then I know: "Right! This is where the foot starts in the PNG! That's the boundary!".
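
In code, that per-row scan might look roughly like this (just a sketch, assuming the pixel data has already been copied into a plain RGBA byte buffer with one byte per channel and the alpha byte last; the answers below cover how to get such a buffer out of a UIImage):

    #import <Foundation/Foundation.h>
    #include <stdint.h>

    // Sketch: find where the visible shape begins in one row, scanning from the left.
    // Assumes `pixels` holds width * height * 4 bytes (RGBA, row 0 at the top).
    NSInteger firstVisibleColumn(const uint8_t *pixels, NSInteger width, NSInteger row) {
        const uint8_t *rowStart = pixels + row * width * 4;
        for (NSInteger x = 0; x < width; x++) {
            if (rowStart[x * 4 + 3] > 0) {   // alpha byte: "Nope! I'm totally opaque!"
                return x;                    // this is the boundary
            }
        }
        return -1;                           // the whole row is transparent
    }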

Then I do that for every row and get some kind of path coordinates. Personally, I need this to find the right point for the rotation axis, as I want to make the image wiggle realistically on a floor. I can't just use the frame's width and origin for that; it would not look realistic.

So: is there a way to introspect the data of a UIImage, or of an image at all, to check whether a pixel is transparent or not?

+2  A: 

There doesn't appear to be a reasonably easy way to do this with UIImage alone. Instead, I see two simpler options.

Ideally, if you control the images being used, you can precalculate the data you need. Then you can either reformat the images so that the point you're interested in is at the center of the image (which the client can derive from just the width and height), or ship the coordinate along with the image. This saves the client from having to recalculate it for every image, which could speed up your application and save battery life.

Alternatively, an image library like libpng can be compiled statically into your program. You can load the image with it, do the processing, then unload it and hand the file off to UIImage. Only the functions you use get linked in, so you may be able to avoid too much bloat, since the rendering routines can be omitted. The disadvantage is that your software then relies on a third-party library.

Jason Owen
+3  A: 

I found this tutorial on the net that shows how to get the UIColor (ARGB) value of each pixel on the screen. Maybe if you tweak that a little, you can pass it a UIImage and get all the values that you need.
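
The tutorial's code isn't reproduced here; as a rough sketch of the same idea (my own example, not the tutorial's exact code, and the name colorAtPixel is made up): draw the image into a 1x1 RGBA bitmap context so that the pixel you care about lands on the context's single pixel, then read the four bytes back.

    #import <UIKit/UIKit.h>

    // Sketch: read one pixel of a UIImage by drawing the image into a 1x1 RGBA
    // context. x and y are pixel coordinates, y counted from the top.
    UIColor *colorAtPixel(UIImage *image, NSInteger x, NSInteger y) {
        CGImageRef cgImage = image.CGImage;
        size_t width  = CGImageGetWidth(cgImage);
        size_t height = CGImageGetHeight(cgImage);

        uint8_t rgba[4] = {0, 0, 0, 0};
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(rgba, 1, 1, 8, 4, space,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(space);
        if (ctx == NULL) return nil;

        // CG coordinates have their origin at the bottom left, so shift the image
        // so that the pixel at (x, y from the top) covers the context's one pixel.
        CGContextDrawImage(ctx, CGRectMake(-(CGFloat)x, (CGFloat)y - (CGFloat)height + 1.0,
                                           width, height), cgImage);
        CGContextRelease(ctx);

        // The RGB values come back premultiplied by alpha; for a pure
        // transparency check only rgba[3] matters.
        return [UIColor colorWithRed:rgba[0] / 255.0 green:rgba[1] / 255.0
                                blue:rgba[2] / 255.0 alpha:rgba[3] / 255.0];
    }

Note that this re-renders the image for every pixel you look at, which is what the comments below are about; if you're scanning whole rows, it's cheaper to render once into a full-size buffer (see the next answer).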

Andy Jacobs
That works, though it creates a copy of the image in order to extract a single pixel from it. Obviously, you'd cache that for the original poster's application.
Mark Bessey
The copy is done to get a kind of "normalized" graphics context: CG takes care of converting from the image's format into the one you specify, so you can then iterate over the pixels. Well, in theory. In practice it didn't work for the alpha; the results are unreliable.
Thanks
+1  A: 

You might find it easier to create an in-memory context with a known format and then render the image into that context. Note that the destination context can use a different bitmap format and color mode than the original image, one you pick for easy reading.
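
A minimal sketch of that idea (my own example, not part of the original answer; the helper name copyRGBAPixels and the premultiplied-RGBA layout are choices, not anything the API mandates): render the UIImage once into a bitmap context you created with a known format, and keep the backing buffer for all later lookups.

    #import <UIKit/UIKit.h>
    #include <stdlib.h>

    // Sketch: copy a UIImage into a malloc'd RGBA buffer with a known layout.
    // The caller owns the returned buffer and must free() it.
    uint8_t *copyRGBAPixels(UIImage *image, size_t *outWidth, size_t *outHeight) {
        CGImageRef cgImage = image.CGImage;
        size_t width  = CGImageGetWidth(cgImage);
        size_t height = CGImageGetHeight(cgImage);

        uint8_t *pixels = calloc(width * height * 4, 1);
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(pixels, width, height,
                                                 8,           // bits per component
                                                 width * 4,   // bytes per row
                                                 space,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(space);
        if (ctx == NULL) { free(pixels); return NULL; }

        // CoreGraphics converts from whatever format the source image uses
        // into the format of this context while drawing.
        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
        CGContextRelease(ctx);

        *outWidth  = width;
        *outHeight = height;
        // Alpha of the pixel at column x, row y (counted from the top) is
        // pixels[(y * width + x) * 4 + 3].
        return pixels;
    }

With a buffer like that, the per-row scan from the question is just a loop over bytes.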

Alternatively, if you're a glutton for punishment, you can get the CGImage property of the UIImage, then use the CoreGraphics functions to determine the pixel format, then write code to decode the pixel format data...

Last option, if you control the images being used: create a mask image (1-bit alpha) from the image, then use that to determine where the edges are. You can easily create such a mask in a graphics editor, or you could probably do it by drawing the image into a 1-bit in-memory context, if you set the drawing properties correctly.
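
CGBitmapContextCreate doesn't offer a literal 1-bit format, but an 8-bit alpha-only context gets the same effect with one byte per pixel. A sketch along the same lines as above (my own example, not part of the original answer; the helper name is made up):

    #import <UIKit/UIKit.h>
    #include <stdlib.h>

    // Sketch: render only the alpha channel of a UIImage into a one-byte-per-pixel
    // buffer, which is all you need to find the opaque edges of the shape.
    uint8_t *copyAlphaMask(UIImage *image, size_t *outWidth, size_t *outHeight) {
        CGImageRef cgImage = image.CGImage;
        size_t width  = CGImageGetWidth(cgImage);
        size_t height = CGImageGetHeight(cgImage);

        uint8_t *mask = calloc(width * height, 1);
        // Alpha-only context: no color space, 8 bits per pixel, alpha channel only.
        CGContextRef ctx = CGBitmapContextCreate(mask, width, height, 8, width,
                                                 NULL, (CGBitmapInfo)kCGImageAlphaOnly);
        if (ctx == NULL) { free(mask); return NULL; }

        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
        CGContextRelease(ctx);

        *outWidth  = width;
        *outHeight = height;
        return mask; // mask[y * width + x] is the alpha of pixel (x, y from the top)
    }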

Mark Bessey
How would I create that in-memory context with a known format?
Thanks
Actually, Andy's answer below has a link to a pretty good example.
Mark Bessey