I'm trying to create an image mask from a composite of two existing images.

First I create the composite, which consists of a small image (the masking image) drawn onto a larger image that is the same size as the background:

UIImage * BaseTextureImage = [UIImage imageNamed:@"background.png"];
UIImage * MaskImage = [UIImage imageNamed:@"my_mask.jpg"];
UIImage * ShapesBase = [UIImage imageNamed:@"largerimage.jpg"];
UIImage * MaskImageFull;

CGSize finalSize = CGSizeMake(480.0, 320.0);
UIGraphicsBeginImageContext(finalSize);

// draw the full-size base first, then the mask shape at its on-screen position
[ShapesBase drawInRect:CGRectMake(0, 0, 480, 320)];
[MaskImage drawInRect:CGRectMake(150, 50, 250, 250)];

MaskImageFull = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

I can output this UIImage (MaskImageFull) and it looks right: it is the full background size, with a white background and my mask object in black, in the correct place on screen.

I then pass the MaskImageFull UIImage through this:

CGImageRef maskRef = [maskImage CGImage];

// build a CGImage mask from the composite's raw data
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                    CGImageGetHeight(maskRef),
                                    CGImageGetBitsPerComponent(maskRef),
                                    CGImageGetBitsPerPixel(maskRef),
                                    CGImageGetBytesPerRow(maskRef),
                                    CGImageGetDataProvider(maskRef), NULL, false);

CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
UIImage *retImage = [UIImage imageWithCGImage:masked];

The problem is that retImage comes out all black. If I pass in a pre-made UIImage as the mask it works fine; it only breaks when I try to build the mask from multiple images.

I thought it was a colorspace thing but couldn't seem to fix it. Any help is much appreciated!
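In case it helps, this is roughly how I've been inspecting the generated composite (just a diagnostic snippet; as far as I understand, CGImageMaskCreate expects grayscale mask data without an alpha channel, which is what made me suspect the colorspace):

// Diagnostic sketch: compare the pixel format of the generated composite
// against a pre-made mask that is known to work.
CGImageRef compositeRef = [MaskImageFull CGImage];
NSLog(@"bits/component: %zu, bits/pixel: %zu, alpha info: %d",
      CGImageGetBitsPerComponent(compositeRef),
      CGImageGetBitsPerPixel(compositeRef),
      (int)CGImageGetAlphaInfo(compositeRef));
NSLog(@"color space model: %d",
      (int)CGColorSpaceGetModel(CGImageGetColorSpace(compositeRef)));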

+3  A: 

I tried the same thing with CGImageCreateWithMask, and got the same result. The solution I found was to use CGContextClipToMask instead:

// (fragment of a method where self is assumed to be a UIImage -- targetSize,
// thumbnailPoint, scaledWidth and scaledHeight are computed earlier in the method)
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;

colorSpace = CGColorSpaceCreateDeviceRGB();

// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate(NULL, targetSize.width, targetSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);

// free the rgb colorspace
CGColorSpaceRelease(colorSpace);

if (mainViewContentContext == NULL)
    return nil;

// clip the context to the mask, then draw the image through the clip
CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);

// create a CGImageRef from the bitmap context's content, and then
// release the bitmap context
CGImageRef mainViewContentBitmap = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);

// convert the finished image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmap];
// the UIImage retains the CGImage, so we can release our reference
CGImageRelease(mainViewContentBitmap);

// return the image
return theImage;
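For reference, wrapped up as a self-contained UIImage category method it might look roughly like this (the method name clippedImageWithSize: is made up, and the thumbnail math is replaced with a simple full-rect draw; treat it as a sketch of the same approach rather than drop-in code):

// Sketch: clip a bitmap context to mask.png, then draw the receiver through
// the clip. Note that Core Graphics' origin is bottom-left, so the output may
// be vertically flipped relative to UIKit coordinates.
- (UIImage *)clippedImageWithSize:(CGSize)targetSize
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, targetSize.width, targetSize.height,
                                                 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL)
        return nil;

    CGRect fullRect = CGRectMake(0, 0, targetSize.width, targetSize.height);

    // clip to the mask, then draw the receiver through the clip
    CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
    CGContextClipToMask(context, fullRect, maskImage);
    CGContextDrawImage(context, fullRect, self.CGImage);

    CGImageRef resultRef = CGBitmapContextCreateImage(context);
    CGContextRelease(context);

    UIImage *result = [UIImage imageWithCGImage:resultRef];
    CGImageRelease(resultRef);
    return result;
}

It would then be called as, e.g., UIImage *masked = [someImage clippedImageWithSize:CGSizeMake(480, 320)];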
catlan
A: 

The image to be masked MUST have an alpha channel; an image created in code (as above) may not have one.
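For example, something along these lines can be used to redraw an image into a context that does have an alpha channel before masking it (just a sketch; the helper name imageWithAlpha is made up):

// Sketch: redraw a CGImage into an ARGB bitmap context so the result is
// guaranteed to carry an alpha channel before it is masked.
CGImageRef imageWithAlpha(CGImageRef source)
{
    size_t width  = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                                 colorSpace, kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL)
        return NULL;

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), source);
    CGImageRef result = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    return result;   // caller is responsible for CGImageRelease
}

The result of imageWithAlpha([image CGImage]) can then be passed to CGImageCreateWithMask in place of the original image.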

Wes Duff