views: 127
answers: 1

I have three 320 x 480 px PNGs (body, mouth, hat) that I load into separate UIImageViews. By stacking the images on top of each other I create a character whose body parts can be swapped out easily. See this photo:

http://www.1976inc.com/dev/iphone/beast.jpg

My problem is that when you touch the topmost UIImageView, the whole image, including the transparent areas, registers the touch event. What I would like is for touch events to register only on the non-transparent parts of each PNG, so that the user can interact with all three UIImageViews.

I'm sure this is simple, but I'm new to iPhone development and I can't seem to figure it out.


Update: I've realized the easiest way to accomplish what I want is to loop through the images, create a bitmap context for each PNG, and read the color data for the pixel where the touch event occurred. If that pixel is transparent, I move on to the next image and try again. This works, but only the first time. For instance, the first time I tap the main view I get this output:

2010-07-26 15:50:06.285 colorTest[21501:207] hat
2010-07-26 15:50:06.286 colorTest[21501:207] offset: 227024 colors: RGB A 0 0 0 0
2010-07-26 15:50:06.293 colorTest[21501:207] mouth
2010-07-26 15:50:06.293 colorTest[21501:207] offset: 227024 colors: RGB A 0 0 0 0
2010-07-26 15:50:06.298 colorTest[21501:207] body
2010-07-26 15:50:06.299 colorTest[21501:207] offset: 227024 colors: RGB A 255 255 255 255

which is exactly what I would expect to see. But if I tap the same area again, I get:

2010-07-26 15:51:21.625 colorTest[21501:207] hat
2010-07-26 15:51:21.626 colorTest[21501:207] offset: 283220 colors: RGB A 255 255 255 255
2010-07-26 15:51:21.628 colorTest[21501:207] mouth
2010-07-26 15:51:21.628 colorTest[21501:207] offset: 283220 colors: RGB A 255 255 255 255
2010-07-26 15:51:21.630 colorTest[21501:207] body
2010-07-26 15:51:21.631 colorTest[21501:207] offset: 283220 colors: RGB A 255 255 255 255

Here is the code I am using.

The touch handler lives in the app's main view:

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSLog(@"Touched balls");
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self.view];

    // Ask each stacked image view what color sits under the touch;
    // for now the result is only logged inside the method.
    for (viewTest *currentView in imageArray) {
        [currentView getPixelColorAtLocation:point];
    }
}
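Eventually the loop should stop at the first image whose pixel under the touch is not transparent; a rough, untested sketch of what I have in mind:

for (viewTest *currentView in imageArray) {
    UIColor *pixelColor = [currentView getPixelColorAtLocation:point];
    // CGColorGetAlpha reads the alpha component of the sampled color.
    if (pixelColor != nil && CGColorGetAlpha(pixelColor.CGColor) > 0.0) {
        NSLog(@"touch belongs to %@", currentView);
        break; // first non-transparent hit wins
    }
}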

It calls a method in a custom class that extends UIImageView; the method returns the color of the pixel under the touch event.

- (UIColor *)getPixelColorAtLocation:(CGPoint)point
{
    UIColor *color = nil;
    CGImageRef inImage = self.image.CGImage;

    CGContextRef context = [self createARGBBitmapContextFromImage:inImage];
    if (context == NULL) return nil;

    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0, 0}, {w, h}};

    // Draw the image into the bitmap context. After drawing, the memory
    // allocated for the context contains the raw image data in the
    // specified color space.
    CGContextDrawImage(context, rect, inImage);

    // Get a pointer to the image data associated with the bitmap context.
    unsigned char *data = CGBitmapContextGetData(context);
    if (data != NULL) {
        // offset locates the pixel in the data from (x, y):
        // 4 bytes of data per pixel, w pixels per row.
        int offset = 4 * ((w * round(point.y)) + round(point.x));
        int alpha = data[offset];        // ARGB: alpha comes first
        int red   = data[offset + 1];
        int green = data[offset + 2];
        int blue  = data[offset + 3];
        NSLog(@"%@", name);
        NSLog(@"offset: %i colors: RGB A %i %i %i %i", offset, red, green, blue, alpha);
        color = [UIColor colorWithRed:(red / 255.0f)
                                green:(green / 255.0f)
                                 blue:(blue / 255.0f)
                                alpha:(alpha / 255.0f)];
    }

    // When finished, release the context and free its backing buffer.
    CGContextRelease(context);
    if (data) { free(data); }

    return color;
}
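Note that this method indexes the bitmap with point.x and point.y directly, so the point needs to be in the image's own pixel coordinates. My PNGs are 320 x 480 and the views fill the screen, so this happens to line up, but converting the point inside the loop would be safer (a hypothetical tweak):

// Convert from the main view into this image view's coordinate space
// before sampling, in case the frames ever stop lining up.
CGPoint localPoint = [self.view convertPoint:point toView:currentView];
UIColor *pixelColor = [currentView getPixelColorAtLocation:localPoint];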

- (CGContextRef)createARGBBitmapContextFromImage:(CGImageRef)inImage
{
    CGContextRef    context = NULL;
    CGColorSpaceRef colorSpace;
    void           *bitmapData;
    int             bitmapByteCount;
    int             bitmapBytesPerRow;

    // Get the image width and height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);

    // Each pixel in the bitmap is represented by 4 bytes:
    // 8 bits each of alpha, red, green, and blue.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount   = (bitmapBytesPerRow * pixelsHigh);

    // Use the device RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // Allocate memory for the image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    // Create the bitmap context. We want premultiplied ARGB, 8 bits
    // per component. Regardless of the source image format (CMYK,
    // grayscale, and so on), it will be converted to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL) {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }

    // Release the color space before returning.
    CGColorSpaceRelease(colorSpace);

    return context;
}

Update 2: Thanks for the quick response. I'm not sure I follow you. If I set hidden to YES, the whole UIImageView is hidden. What I want is for the transparent portion of the PNG not to register touch events. For instance, in the image I included above, if you tap the worm, stem, or leaves (which are all part of the same PNG), a touch event should fire from that image view, but if you tap the circle, a touch event should fire from the circle's image view. BTW, here is the code I am using to place them in the view:

UIView *tempView = [[UIView alloc] init];

UIImageView *imageView1 = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"body.png"]];
[imageView1 setUserInteractionEnabled:YES];
UIImageView *imageView2 = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"mouth.png"]];
[imageView2 setUserInteractionEnabled:YES];
UIImageView *imageView3 = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"hat.png"]];
[imageView3 setUserInteractionEnabled:YES];

[tempView addSubview:imageView1];
[tempView addSubview:imageView2];
[tempView addSubview:imageView3];

[self.view addSubview:tempView];
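For what it's worth, one direction I'm exploring is overriding pointInside:withEvent: in the UIImageView subclass so that transparent pixels fall through to whatever is underneath; a rough, untested sketch reusing getPixelColorAtLocation: from above:

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    // pointInside: receives the point in this view's own coordinate
    // system, which is what getPixelColorAtLocation: expects here.
    UIColor *pixelColor = [self getPixelColorAtLocation:point];
    if (pixelColor == nil) {
        return [super pointInside:point withEvent:event];
    }
    // Claim the touch only when the pixel is not fully transparent.
    return CGColorGetAlpha(pixelColor.CGColor) > 0.0;
}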
A: 

First off: you could test each pixel for transparency, but simply hiding the image will probably suit your needs.

You can hide the image with [myImage setHidden:YES]; or myImage.hidden = YES;. Then, when handling a touch, check both that the touch falls inside the image's frame and that the image is not hidden:

if (CGRectContainsPoint(myImage.frame, touchPosition) && myImage.hidden == NO)
{
    // respond to the touch on this image
}

The myImage.hidden == NO check makes sure a hidden image does not respond to the tap.
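Putting it together, a minimal touchesEnded: sketch (hatView, mouthView, and bodyView are placeholder names for your three image views):

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint touchPosition = [[touches anyObject] locationInView:self.view];

    // Check front to back; the first visible image whose frame
    // contains the touch claims it.
    NSArray *layers = [NSArray arrayWithObjects:hatView, mouthView, bodyView, nil];
    for (UIImageView *layer in layers) {
        if (CGRectContainsPoint(layer.frame, touchPosition) && layer.hidden == NO) {
            NSLog(@"touched %@", layer);
            break;
        }
    }
}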

thyrgle