My big picture goal is to have a grey field over an image, and then as the user rubs on that grey field, it reveals the image underneath. Basically like a lottery scratcher card. I've done a bunch of searching through the docs, as well as this site, but can't find the solution.

The following is just a proof of concept to test "erasing" an image based on where the user touches, but it isn't working. :(

I have a UIView that detects touches, then sends the coords of the move to the UIViewController that clips the image in a UIImageView by doing the following:

- (void) moveDetectedFrom:(CGPoint) from to:(CGPoint) to
{
    UIImage* image = bkgdImageView.image;
    CGSize s = image.size;
    UIGraphicsBeginImageContext(s);
    CGContextRef g = UIGraphicsGetCurrentContext();

    CGContextMoveToPoint(g, from.x, from.y);
    CGContextAddLineToPoint(g, to.x, to.y);
    CGContextClosePath(g);
    CGContextAddRect(g, CGRectMake(0, 0, s.width, s.height));
    CGContextEOClip(g);
    [image drawAtPoint:CGPointZero];
    bkgdImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [bkgdImageView setNeedsDisplay];
}

The problem is that the touches are sent to this method just fine, but nothing happens on the original.

Am I doing the clip path incorrectly? Or?

Not really sure...so any help you may have would be greatly appreciated.

Thanks in advance, Joel

A: 

You usually want to draw into the current graphics context inside of a drawRect: method, not just any old method. Also, a clip region only affects what is drawn to the current graphics context. But instead of going into why this approach isn't working, I'd suggest doing it differently.

What I would do is have two views. One with the image, and one with the gray color that is made transparent. This allows the graphics hardware to cache the image, instead of trying to redraw the image every time you modify the gray fill.

The gray one would be a UIView subclass backed by a CGBitmapContext that you draw into, clearing the pixels the user touches.

There are probably several ways to do this. I'm just suggesting one way above.
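Here's a rough sketch of that idea (untested; the ScratchView class name, the 0.5 gray, and the 30-point brush size are placeholders, not anything from a real project):

@interface ScratchView : UIView {
    CGContextRef bitmapContext;   // ARGB bitmap that we erase into
}
@end

@implementation ScratchView

- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        self.opaque = NO;
        self.backgroundColor = [UIColor clearColor];
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        bitmapContext = CGBitmapContextCreate(NULL, frame.size.width, frame.size.height,
                                              8, frame.size.width * 4, colorSpace,
                                              kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorSpace);
        // Start out fully covered in gray.
        CGContextSetGrayFillColor(bitmapContext, 0.5, 1.0);
        CGContextFillRect(bitmapContext, CGRectMake(0, 0, frame.size.width, frame.size.height));
    }
    return self;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    // Punch a transparent hole where the finger is.
    CGContextSetBlendMode(bitmapContext, kCGBlendModeClear);
    CGContextFillEllipseInRect(bitmapContext, CGRectMake(p.x - 15, p.y - 15, 30, 30));
    CGContextSetBlendMode(bitmapContext, kCGBlendModeNormal);
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    // Draw the (partially erased) gray bitmap over the image view sitting below this view.
    // The bitmap's bottom-left origin and UIKit's flipped context cancel out here,
    // so the holes land where the finger touched.
    CGImageRef image = CGBitmapContextCreateImage(bitmapContext);
    CGContextDrawImage(UIGraphicsGetCurrentContext(), self.bounds, image);
    CGImageRelease(image);
}

- (void)dealloc {
    CGContextRelease(bitmapContext);
    [super dealloc];
}

@end

Add the ScratchView on top of the UIImageView (same frame) and only the gray layer gets redrawn as the user rubs; the image underneath never has to be re-rendered.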

lucius
Thanks for pointing me in the right direction. I was looking into that before, but couldn't find the right way to 1) create an ARGB bitmap (it seems like it is always an RGB) and 2) manipulate the pixel's alpha value once I have the 2D array of pixel data. I'll keep digging and post what I find out. Thanks.
Joel
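For reference, the two pieces asked about above look roughly like this in plain Core Graphics (a sketch with made-up dimensions; pass in your own buffer so the pixel bytes are yours to manipulate directly):

size_t width = 320, height = 480;
size_t bytesPerRow = width * 4;
unsigned char *pixels = calloc(height * bytesPerRow, 1);   // your own ARGB backing store

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// kCGImageAlphaPremultipliedFirst puts the alpha byte first in each pixel (ARGB).
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                         colorSpace, kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);

// Make one pixel fully transparent. Because the context is premultiplied,
// the color bytes have to be zeroed along with the alpha byte.
size_t x = 10, y = 20;
unsigned char *p = pixels + y * bytesPerRow + x * 4;
p[0] = p[1] = p[2] = p[3] = 0;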
A: 

I tried to do the same thing a while ago using just Core Graphics, and it can be done, but trust me, the effect is not as smooth and soft as the user expects. So, since I knew how to work with OpenCV (the Open Computer Vision library), and since it is written in C, I knew I could use it on the iPhone. Doing what you want to do with OpenCV is extremely easy. First you need a couple of functions to convert a UIImage to an IplImage, which is the type OpenCV uses to represent images of all kinds, and back the other way.

+ (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
    // This is the function you use to convert a UIImage -> IplImage
    CGImageRef imageRef = image.CGImage;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    IplImage *iplimage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
    CGContextRef contextRef = CGBitmapContextCreate(iplimage->imageData, iplimage->width, iplimage->height,
                                                    iplimage->depth, iplimage->widthStep,
                                                    colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);

    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);

    return iplimage;
}



+ (UIImage *)UIImageFromIplImage:(IplImage *)image {
    // Convert an IplImage -> UIImage
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSData *data = [[NSData alloc] initWithBytes:image->imageData length:image->imageSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(image->width, image->height,
                                        image->depth, image->depth * image->nChannels, image->widthStep,
                                        colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    // Autorelease so callers that simply assign the result (e.g. to imageView.image) don't leak.
    UIImage *ret = [[[UIImage alloc] initWithCGImage:imageRef] autorelease];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    [data release];
    return ret;
}

Now that you have both of the basic conversion functions, you can do whatever you want with your IplImage. This is what you want:

+ (UIImage *)erasePointinUIImage:(IplImage *)image :(CGPoint)point :(int)r {
    // r is the radius of the erased circle
    int a = point.x;
    int b = point.y;
    int position;
    int minX, minY, maxX, maxY;
    minX = (a - r > 0) ? a - r : 0;
    minY = (b - r > 0) ? b - r : 0;
    maxX = ((a + r) < (image->width)) ? a + r : (image->width);
    maxY = ((b + r) < (image->height)) ? b + r : (image->height);

    for (int i = minX; i < maxX; i++)
    {
        for (int j = minY; j < maxY; j++)
        {
            position = ((j - b) * (j - b)) + ((i - a) * (i - a));
            if (position <= r * r)
            {
                // Zero out all four channels (byte offsets 0-3); with a zero alpha the pixel is fully transparent.
                uchar *ptr = (uchar *)(image->imageData) + (j * image->widthStep + i * image->nChannels);
                ptr[0] = ptr[1] = ptr[2] = ptr[3] = 0;
            }
        }
    }
    UIImage *res = [self UIImageFromIplImage:image];
    return res;
}
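
A rough usage sketch (none of these names are from the answer itself): it assumes the three class methods above were added to your view controller, that overlayImageView is a UIImageView showing the gray cover on top of the photo, that coverIplImage is an IplImage * ivar, and that the cover image and the view have the same dimensions (otherwise scale the touch point first).

- (void)viewDidLoad {
    [super viewDidLoad];
    // Keep one IplImage of the gray cover around and erase into it as touches arrive.
    coverIplImage = [[self class] CreateIplImageFromUIImage:[UIImage imageNamed:@"gray_cover.png"]];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:overlayImageView];
    // Erase a 20-pixel-radius circle around the finger and show the result.
    overlayImageView.image = [[self class] erasePointinUIImage:coverIplImage :p :20];
}

Remember to cvReleaseImage(&coverIplImage) in dealloc when you're done with it.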

Sorry for the formatting.

If you want to know how to port OpenCV to the iPhone, see Yoshimasa Niwa's write-up on the subject.

If you want to check out an app currently using OpenCV on the App Store, go get Flags&Faces.

sicario