I have a feeling this is not an easy task, but I need to combine or flatten a UIImageView with another UIImageView lying above it. For example: I have two UIImageViews. One of them holds a UIImage of a grassy field (1200 x 1200 pixels). The other holds a UIImage of a basketball (128 x 128 pixels) and is positioned above the image of the grassy field so that the basketball appears to be sitting on the field. I want to be able to SAVE the superimposed UIImageViews as a single image file to my photo album, which means I will need to combine the two images somehow. How would this be accomplished? (NOTE: Taking a screenshot (320 x 480 pixels) would not be an acceptable solution, as I wish to preserve the size of 1200 x 1600 pixels.)

QUESTION:
How can I flatten multiple UIImageViews into one image and SAVE the result while preserving the original size/resolution?

+2  A: 

Why don't you just draw the original UIImages into a background buffer on top of each other and then write the combined image to a file? Below is an example of how you can draw two images into the same buffer:

CGImageRef bgimage = [bguiimage CGImage];
size_t width = CGImageGetWidth(bgimage);
size_t height = CGImageGetHeight(bgimage);

// Create a temporary RGBA buffer (8 bits per channel).
// GLubyte comes from the OpenGL ES headers; a plain unsigned char works just as well.
GLubyte *data = (GLubyte *)malloc(width * height * 4);
assert(data);

// Create a bitmap context backed by the buffer
CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, width * 4, CGImageGetColorSpace(bgimage), kCGImageAlphaPremultipliedLast);
assert(ctx);

// Flip the context upside-down because OpenGL texture coordinates differ;
// omit these two lines if you only want a combined image, not an OpenGL texture
CGContextTranslateCTM(ctx, 0, height);
CGContextScaleCTM(ctx, 1.0, -1.0);

// Draw the background image to fill the whole buffer
CGContextDrawImage(ctx, CGRectMake(0, 0, (CGFloat)width, (CGFloat)height), bgimage);

// Draw the ball image centered on top of the background
CGImageRef ballimage = [balluiimage CGImage];
size_t bwidth = CGImageGetWidth(ballimage);
size_t bheight = CGImageGetHeight(ballimage);

float x = (width - bwidth) / 2.0;
float y = (height - bheight) / 2.0;
CGContextDrawImage(ctx, CGRectMake(x, y, (CGFloat)bwidth, (CGFloat)bheight), ballimage);

CGContextRelease(ctx);
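
To then actually write the combined image out, one option is to snapshot the bitmap context into a CGImage before releasing it and hand the result to UIKit. This is only a sketch, not part of the answer above: the names combinedCG/combined and the call to UIImageWriteToSavedPhotosAlbum are assumptions, and it presumes the OpenGL-style flip was skipped (otherwise the saved image comes out upside-down).

// Do this before CGContextRelease(ctx): snapshot the context into a CGImage
CGImageRef combinedCG = CGBitmapContextCreateImage(ctx);
UIImage *combined = [UIImage imageWithCGImage:combinedCG];
CGImageRelease(combinedCG);

// Save to the photo album; pass a target/selector instead of nil to get a completion callback
UIImageWriteToSavedPhotosAlbum(combined, nil, nil, NULL);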
tequilatango
Thanks for this suggestion. Does this preserve the resolution/pixels of the original images or do they end up having the same size as the views that hold them?
RexOnRoids
It preserves the resolution, because you draw the original image.
tequilatango
+1  A: 

This takes any view and makes a UIImage out of it. The view and its subviews are "flattened" into a single UIImage that you can display or save to disk.

- (UIImage *)imageFromView {
    // Requires QuartzCore (#import <QuartzCore/QuartzCore.h>) for -renderInContext:
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
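
For example, a usage sketch that saves the flattened view straight to the photo album (the nil arguments simply skip the completion callback):

UIImage *flattened = [self imageFromView];
UIImageWriteToSavedPhotosAlbum(flattened, nil, nil, NULL);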
Corey Floyd
By the way, Corey, is there a way to do this (your code above) while preserving the image resolution of the view that forms the base layer (self.view.layer in your code)? I have a photo sized 1200 x 1600, but after processing with your code the resulting image has the pixel size of the UIImageView that contained it (320 x 427).
RexOnRoids
I can't say for sure, but the tactic used by tequilatango appears to accomplish this. The method I use will only capture what is on screen, at the screen's resolution. It's quick and dirty, but it works. If you need better results, you'll need to draw the images into the buffer as he illustrated (see the sketch below). Of course, since this is the iPhone, if there is a way to decrease the size of your images without affecting the UX, you may want to consider it for memory/performance reasons.
Corey Floyd
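
Along those lines, here is a minimal sketch that combines the two ideas using UIKit's image-context helpers instead of a raw malloc'd buffer: draw the full-resolution UIImages themselves (not the views) into a context sized to the background image, then grab the result. The method name, parameter names, and the centering of the overlay are illustrative assumptions rather than anything taken from the answers above.

// Composite two full-resolution UIImages into one, preserving the background
// image's pixel size (e.g. 1200 x 1600) regardless of how the views are sized on screen
- (UIImage *)combinedImageWithBackground:(UIImage *)background overlay:(UIImage *)overlay {
    // The context is sized to the background image, not to an on-screen view
    UIGraphicsBeginImageContext(background.size);

    [background drawInRect:CGRectMake(0, 0, background.size.width, background.size.height)];

    // Center the overlay on the background; adjust to match your actual layout
    CGFloat x = (background.size.width - overlay.size.width) / 2.0;
    CGFloat y = (background.size.height - overlay.size.height) / 2.0;
    [overlay drawAtPoint:CGPointMake(x, y)];

    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return combined;
}

The result can then be passed to UIImageWriteToSavedPhotosAlbum() like any other UIImage.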