Imagine this: I have a PNG image that shows a teddy bear or a triangle, i.e. something non-rectangular. Now I want to find out at which point, for a given row and direction (coordinates relative to the UIImageView's coordinate system), the actual visible image starts.
Example: let's say I need to know where the foot of the teddy bear begins from the left, looking at the last row. That's certainly not just frame.origin.x, because the foot is not rectangular. It may begin somewhere around x=12.
I would iterate somehow over the image data and ask each pixel: "Hey, are you transparent?" If it is, I go on to the next one. "Hey, what about you? Transparent?" Until I get the answer: "Nope! I'm totally opaque!" Then I know: "Right! This is where the foot starts in the PNG! That's the boundary!"
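A minimal sketch of that row scan, assuming I already had the raw RGBA8 pixel data in a flat byte array (the names `pixels`, `width`, and `alphaThreshold` are just my placeholders):

```swift
// Given a raw RGBA8 buffer (4 bytes per pixel, row-major), return the x of the
// first pixel in `row` whose alpha is above a threshold, i.e. the left boundary.
func firstOpaqueX(in pixels: [UInt8], width: Int, row: Int, alphaThreshold: UInt8 = 0) -> Int? {
    let rowStart = row * width * 4
    for x in 0..<width {
        let alpha = pixels[rowStart + x * 4 + 3]  // alpha is the 4th byte of each RGBA pixel
        if alpha > alphaThreshold {
            return x                              // first non-transparent pixel in this row
        }
    }
    return nil                                    // the whole row is transparent
}
```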
Then I do that for every row and get some kind of path coordinates. Personally, I need this to find the right point for the rotation axis, because I want to make the image wiggle realistically on a floor. I can't just use the frame width and origin information for that; it wouldn't look realistic.
So: is there a way to introspect the data of a UIImage (or the underlying image) at all, in order to check whether a given pixel is transparent or not?
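To pin down what I'm imagining, here is only a sketch of how the pixel data might be obtained, assuming the UIImage is backed by a CGImage with the default .up orientation. The idea would be to draw it into an RGBA bitmap context whose byte layout I control, then read the alpha channel directly (`rgbaPixels` is just a name I made up):

```swift
import UIKit

// Draw the image into an RGBA8 bitmap context so the byte layout is known,
// then hand back the raw bytes together with the pixel dimensions.
func rgbaPixels(of image: UIImage) -> (pixels: [UInt8], width: Int, height: Int)? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    let drewOK = pixels.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    return drewOK ? (pixels, width, height) : nil
}

// Usage idea: check whether the pixel at (x, y) is transparent.
// if let data = rgbaPixels(of: teddyImage) {
//     let alpha = data.pixels[(y * data.width + x) * 4 + 3]
//     let isTransparent = alpha == 0
// }
```

(One thing I'm aware of: these coordinates are in the bitmap's pixel space, so I would still have to map them to the UIImageView's point coordinate system, taking the image scale and contentMode into account.)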