I've hooked up a UITapGestureRecognizer to a UIImageView containing the image I'd like to display on an iPad screen, and I'm able to consume the user's taps just fine. However, my image is of a hand on a table, and I'd like to know whether the user has tapped on the hand or on the table part of the image. I can get the x, y coordinates of the tap with CGPoint tapLocation = [recognizer locationInView:self.view]; but I'm at a loss for how to map that CGPoint to, say, the region of the image that contains the hand vs. the region that contains the table. Everything I've read so far deals with determining whether a CGPoint is in a particular rectangular area, but what if you need to determine whether that CGPoint lies within the boundaries of a more irregular shape? Is that even possible? Any suggestions, or just pointing me in the right direction, would be a big help. Thanks!

+2  A: 

You could use pointInside:withEvent: to define the hit area programmatically.

To elaborate, you just take the point and evaluate whether it falls in the area you're after with a series of if statements. If it does, return YES; if it doesn't, return NO. If this is related to this post, then you could compare the distance from the point to the center of your circle (via the Pythagorean theorem) against the circle's radius.
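For illustration, a minimal sketch of that idea, assuming a hypothetical UIView subclass (HandHitView) whose hit area is a circle; the center and radius here are placeholder values:

    #import <UIKit/UIKit.h>

    @interface HandHitView : UIView
    @end

    @implementation HandHitView

    // Only report touches that fall inside a circular region.
    - (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
        CGPoint center = CGPointMake(100.0f, 100.0f); // placeholder center
        CGFloat radius = 50.0f;                       // placeholder radius
        CGFloat dx = point.x - center.x;
        CGFloat dy = point.y - center.y;
        // Pythagorean theorem: inside if distance <= radius.
        return (dx * dx + dy * dy) <= (radius * radius);
    }

    @end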

umop
Could you elaborate on this idea a bit for me? How would you define this somewhat amorphous hand shape programmatically?
ScottS
I edited the post to elaborate on the answer.
umop
The problem is how to find the area (a series of coordinates) for an irregular (non-rectangular, non-circular, non-triangular, non-use-geometry-to-figure-out) shape. I'm looking for how (or whether) one can determine user taps to a fairly fine degree of accuracy inside or outside of, say, an amoeba or a human hand.
ScottS
pointInside: should work. You'll just have to write the logic for it. If you want to write a series of if statements defining the area, you could do that. If you wanted to check the image to see whether a given pixel is black or white, you could do that. Outside of that, I think you'd be stuck with using a series of button objects. I am not aware of any shape-based button hit-area, unfortunately.
umop
I was stuck on how to determine the color or alpha value of the pixel that was tapped until I found this little bit of goodness: http://www.markj.net/iphone-uiimage-pixel-color/. I modified the code to only return the alpha value, and then changed the area of the image outside the hand shape to be transparent so it has an alpha of 0. I used UITapGestureRecognizer and locationInView: for the view it's attached to - works like a champ and could be used with any shape/color/transparency combo you'd like. Thanks for pointing me in the right direction.
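For reference, a condensed sketch of that alpha-sampling approach (not the markj.net code itself): draw the tapped pixel into a one-byte, alpha-only bitmap context and treat alpha == 0 as "outside the hand". It assumes the image view shows the image at 1:1, so the tap location maps directly to pixel coordinates:

    #import <UIKit/UIKit.h>

    // Returns the alpha component (0-255) of the pixel at point in image.
    static UInt8 AlphaAtPoint(UIImage *image, CGPoint point) {
        UInt8 alpha = 0;
        // A 1x1, alpha-only bitmap context backed by our single byte.
        CGContextRef context = CGBitmapContextCreate(&alpha, 1, 1, 8, 1,
                                                     NULL, kCGImageAlphaOnly);
        if (context == NULL) return 0;
        UIGraphicsPushContext(context);
        // Shift the image so the pixel of interest lands at (0, 0).
        [image drawAtPoint:CGPointMake(-point.x, -point.y)];
        UIGraphicsPopContext();
        CGContextRelease(context);
        return alpha;
    }

    // In the tap handler (imageView is hypothetical):
    // CGPoint tapLocation = [recognizer locationInView:imageView];
    // BOOL onHand = AlphaAtPoint(imageView.image, tapLocation) > 0;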
ScottS
A: 

You can use a bounding rectangle that covers most or all of the hand.

If the user is using his finger to tap either the hand or the table, I doubt that you want him or her to be extremely precise with the tap.
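If a single rectangle is precise enough, the test is one line. A sketch, where handRect is a hypothetical property holding a CGRect that covers the hand:

    // Hypothetical tap handler; self.handRect covers the hand.
    - (void)handleTap:(UITapGestureRecognizer *)recognizer {
        CGPoint tapLocation = [recognizer locationInView:self.view];
        if (CGRectContainsPoint(self.handRect, tapLocation)) {
            // Tapped (roughly) on the hand.
        } else {
            // Tapped on the table.
        }
    }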

Gilbert Le Blanc
That's actually somewhat close to my current solution, but I feel like it's very kludgey and that there's got to be a better way. The image of the hand on the table takes up almost the whole iPad app screen, so there's plenty of screen real estate for the user to tap on fingers or on the table between the fingers. So I've removed the gesture recognizer from the UIImageView holding the hand image and have placed a series of small, rectangular UIImageViews hooked to recognizers inside the hand boundary and outside it. It feels very rough and imprecise, so I'm just looking for a better way.
ScottS
@ScottS: When I wrote my answer, I was imagining a hand about the size of a large cursor.
Gilbert Le Blanc
A: 

An extension of the bounding-rectangle answer:

  • you could define several smaller bounding rectangles that would approximate a hand without covering the rest of the screen.

OR

  • you could keep a list of rectangles, one for each of your objects, and put the hand at the end of the list. Then, if a tap landed on button X at the top right of the screen, technically also inside the hand's rectangle, it would choose button X, because that rectangle is found first (see the sketch below).
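A sketch of that ordered-list idea, with hypothetical placeholder geometry; more specific targets come first and the hand's larger rectangle comes last:

    // Hypothetical tap handler using an ordered list of hit rectangles.
    - (void)handleTap:(UITapGestureRecognizer *)recognizer {
        CGPoint tapLocation = [recognizer locationInView:self.view];
        CGRect regions[] = {                            // placeholder geometry
            CGRectMake(700.0f, 20.0f, 80.0f, 40.0f),    // button X (top right)
            CGRectMake(20.0f, 20.0f, 80.0f, 40.0f),     // another control
            CGRectMake(150.0f, 200.0f, 400.0f, 500.0f)  // rough hand bounds
        };
        NSInteger count = sizeof(regions) / sizeof(regions[0]);
        for (NSInteger i = 0; i < count; i++) {
            if (CGRectContainsPoint(regions[i], tapLocation)) {
                NSLog(@"hit region %ld", (long)i); // first match wins,
                return;                            // so button X beats the hand
            }
        }
        NSLog(@"tap was outside every region");
    }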
piggles
The solution I went with is the "define several smaller bounding rectangles that would approximate a hand" idea. It works reliably, but requires an increasing number of smaller rectangular UIViews to gain tap-recognition precision. I'm very open to new ideas for solving this problem, but until then I've got some 38-odd UIViews placed on my hand image, all hooked up for precise gesture recognition. The code feels brittle and bloated, but in the absence of any other solution, this will at least enable the feature.
ScottS
@ScottS I'm not an expert on iPhone programming per se, but can you use only one `UIView` which contains the 38 rectangles and discard the gesture event if it's not inside the hand/rectangles? That would make your code much less 'brittle' if you come across a better solution (of whose existence I am certain).
piggles
Mechko, I also need to pick up gestures outside the hand image, between the fingers for example, so I have to have two sets of progressively smaller rectangles lining the edges of the image area, both inside the hand and outside it. But I am also confident that there is a better solution than this.
ScottS
A: 
  • define the shape by a black-and-white bitmap (1 bit per pixel) and check whether the particular bit is set. This would eat a lot of memory if you had a lot of large shapes, but for one bitmap with a hand it should not be a big deal (a sketch of this appears after the list).
  • define the shape as a polygon. Then you need to do a point-in-polygon test. Wikipedia has a wonderful article on this, with links to code, here: http://en.wikipedia.org/wiki/Point_in_polygon
  • iPad libraries might have this already implemented. Sorry, I cannot help you there; I'm not an iPad developer.
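A sketch of the 1-bit-per-pixel idea; mask is a hypothetical bit array prepared offline from the hand image:

    #import <UIKit/UIKit.h>

    // Test whether the pixel at (x, y) is set in a 1-bit-per-pixel mask.
    // The mask is assumed packed row-major, 8 pixels per byte, most
    // significant bit first; width is the image width in pixels.
    static BOOL MaskContainsPoint(const uint8_t *mask, NSInteger width,
                                  NSInteger x, NSInteger y) {
        NSInteger bitIndex = y * width + x;
        return (mask[bitIndex / 8] >> (7 - (bitIndex % 8))) & 1;
    }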
Roman Zenka
A: 

Late to the party, but the core tool you want here is a "point in polygon" routine. This is a generic approach, independent of iOS.

Google has lots of info, but the general approach is:

1) Define your closed polygon. It sounds like this might be a bit of work in your case.

2) Choose any second point not equal to your original point. (Yes, any point.)

3) For each edge in the polygon, determine whether the ray from your original point through the second point intersects that polygon edge. This requires a line-segment-intersects-ray routine, also available on the 'tubes.

4) If the number of intersections is odd, the point is inside the polygon; if the count is even, it's outside.

For general geometry-type issues, I highly recommend Paul Bourke: http://local.wasp.uwa.edu.au/~pbourke/geometry/insidepoly/
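For completeness, a compact version of the even-odd test described above (the classic crossing-number routine; it casts a horizontal ray from the test point rather than picking an explicit second point):

    #import <UIKit/UIKit.h>

    // Even-odd (crossing number) point-in-polygon test.
    // verts holds the polygon's vertices in order; n is the vertex count.
    static BOOL PointInPolygon(NSInteger n, const CGPoint *verts, CGPoint p) {
        BOOL inside = NO;
        for (NSInteger i = 0, j = n - 1; i < n; j = i++) {
            // Does edge (j -> i) straddle the horizontal line through p.y,
            // and does it cross to the right of p.x?
            if (((verts[i].y > p.y) != (verts[j].y > p.y)) &&
                (p.x < (verts[j].x - verts[i].x) * (p.y - verts[i].y) /
                           (verts[j].y - verts[i].y) + verts[i].x)) {
                inside = !inside;
            }
        }
        return inside;
    }

The hand's outline could be traced once (in an image editor, say) and stored as the verts array; the point from locationInView: then goes straight into this test.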

orion elenzil