I've noticed that there are many questions about how to handle UIImage objects, especially in conjunction with UIImagePickerController, and how to then display them in a view (usually a UIImageView). Here is a collection of common questions and their answers. Feel free to edit and add your own.

I obviously learnt all this information from somewhere too. Various forum posts, StackOverflow answers and my own experimenting brought me to all these solutions. Credit goes to those who posted some sample code that I've since used and modified. I don't remember who you all are - but hats off to you!

How Do I Select An Image From the User's Images or From the Camera?

You use UIImagePickerController. The class reference in Apple's developer documentation gives a decent overview of how to use it.

Basically, you create an instance of the class (it's a modal view controller), set yourself (or some other class) as the delegate, and display it. Then you'll get notified when the user selects some form of media (a movie or an image, as of 3.0 on the 3GS), and you can do whatever you want with it.
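For illustration, a minimal sketch of presenting the picker (pre-ARC, matching the era of this post) might look like this:

UIImagePickerController* picker = [[UIImagePickerController alloc] init];
picker.delegate = self; // self adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate
picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
[self presentModalViewController:picker animated:YES];
[picker release]; // the modal presentation retains the picker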

My Delegate Was Called - How Do I Get The Media?

The delegate method signature is the following:

- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info;

You can put a breakpoint in the delegate method and inspect the dictionary in the debugger; you use it to extract the media. For example:

UIImage* image = [info objectForKey:UIImagePickerControllerOriginalImage];

There are other keys that work as well, all in the documentation.
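Putting it together, a minimal sketch of the delegate might look like this (how you dismiss the picker is up to you):

- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage* image = [info objectForKey:UIImagePickerControllerOriginalImage];
    // ... hand the image off to your own code here ...
    [self dismissModalViewControllerAnimated:YES];
}

// It's also polite to handle the user cancelling:
- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker
{
    [self dismissModalViewControllerAnimated:YES];
}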

OK, I Got The Image, But It Doesn't Have Any Geolocation Data. What gives?

Unfortunately, Apple decided that we're not worthy of this information. When they load the data into the UIImage, they strip it of all the EXIF/Geolocation data.

Can I Get To The Original File Representing This Image on the Disk?

Nope. For security purposes, you only get the UIImage.

How Can I Look At The Underlying Pixels of the UIImage?

Since the UIImage is immutable, you can't get at its pixels directly. However, you can make a copy of the pixel data. The code to do this looks something like this:

UIImage* image = ...; // An image
NSData* pixelData = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const unsigned char* pixelBytes = (const unsigned char *)[pixelData bytes];

// Read out the components, assuming 32-bit RGBA
for (NSUInteger i = 0; i < [pixelData length]; i += 4) {
    unsigned char red   = pixelBytes[i];
    unsigned char green = pixelBytes[i+1];
    unsigned char blue  = pixelBytes[i+2];
    unsigned char alpha = pixelBytes[i+3];
    // ... inspect the components here ...
}

[pixelData release]; // CGDataProviderCopyData follows the Copy rule, so we own this

However, note that CGDataProviderCopyData provides you with an "immutable" reference to the data - meaning you can't change it (you may get an EXC_BAD_ACCESS crash if you try). Look at the next question if you want to see how you can modify the pixels.

How Do I Modify The Pixels of the UIImage?

The UIImage is immutable, meaning you can't change it. Apple posted a great article on how to get a copy of the pixels and modify them, and rather than copy and paste it here, you should just go read the article.
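For orientation, here's a minimal sketch of what creating such a bitmap context might look like, assuming 32-bit RGBA and that `image` is the UIImage to modify (the article covers the details and edge cases):

CGImageRef imageRef = image.CGImage;
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height,
                                            8,          // bits per component
                                            width * 4,  // bytes per row
                                            colorSpace,
                                            kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Draw the image into the context; the context's backing store is ours to modify
CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
unsigned char* pixels = (unsigned char *)CGBitmapContextGetData(bitmap);
// ... modify pixels here ...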

Once you have the bitmap context as they mention in the article, you can do something similar to this to get a new UIImage with the modified pixels:

CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage* newImage = [UIImage imageWithCGImage:ref];
CGImageRelease(ref); // +imageWithCGImage: retains it, so release our reference

Do remember to release your references though (including the bitmap context itself), otherwise you're going to be leaking quite a bit of memory.

After I Select 3 Images From The Camera, I Run Out Of Memory. Help!

You have to remember that even though these images take up only a few hundred kilobytes at most on disk, that's because they're compressed as PNG or JPG. When loaded into a UIImage, they become uncompressed. A quick back-of-the-envelope calculation would be:

width x height x 4 = bytes in memory

That's assuming 32-bit pixels. If you have 16-bit pixels (for example, images stored as RGBA-5551), then you'd replace the 4 with a 2.

Now, images taken with the camera are 1600 x 1200 pixels, so let's do the math:

1600 x 1200 x 4 = 7,680,000 bytes = ~8 MB

8 MB is a lot, especially when you have a limit of around 24 MB for your application. That's why you run out of memory.

OK, I Understand Why I Have No Memory. What Do I Do?

There is never any reason to display images at their full resolution. The iPhone has a screen of 480 x 320 pixels, so you're just wasting space. If you find yourself in this situation, ask yourself the following question: Do I need the full resolution image?

If the answer is yes, then you should save it to disk for later use.

If the answer is no, then read the next part.

Once you've decided what to do with the full-resolution image, then you need to create a smaller image to use for displaying. Many times you might even want several sizes for your image: a thumbnail, a full-size one for displaying, and the original full-resolution image.

OK, I'm Hooked. How Do I Resize the Image?

Unfortunately, there is no built-in way to resize an image. Also, it's important to note that resizing produces a new image - you're not modifying the old one.

There are a couple of ways to do the resizing. I'll present both here, and explain the pros and cons of each.

Method 1: Using UIKit

+ (UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize
{
    // Create a graphics image context
    UIGraphicsBeginImageContext(newSize);

    // Tell the old image to draw in this new context, with the desired
    // new size
    [image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];

    // Get the new image from the context
    UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();

    // End the context
    UIGraphicsEndImageContext();

    // Return the new image.
    return newImage;
}

This method is very simple and works great. It will also deal with the UIImageOrientation for you, meaning that you don't have to care whether the camera was sideways when the picture was taken. However, this method is not thread-safe, and since thumbnailing is a relatively expensive operation (roughly 2.5 s on a 3G for a 1600 x 1200 pixel image), it's very much an operation you may want to do in the background, on a separate thread.

Method 2: Using CoreGraphics

// radians() is not built in; this helper is suggested in the comments below
#define radians(degrees) ((degrees) * M_PI / 180.0)

+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSize:(CGSize)targetSize
{
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;

    CGImageRef imageRef = [sourceImage CGImage];
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);

    if ((bitmapInfo & kCGBitmapAlphaInfoMask) == kCGImageAlphaNone) {
        bitmapInfo = kCGImageAlphaNoneSkipLast;
    }

    CGContextRef bitmap;

    // Pass 0 for bytesPerRow so CoreGraphics computes it for the new width
    if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
        bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);

    } else {
        bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);

    }

    if (sourceImage.imageOrientation == UIImageOrientationLeft) {
        CGContextRotateCTM(bitmap, radians(90));
        CGContextTranslateCTM(bitmap, 0, -targetHeight);

    } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
        CGContextRotateCTM(bitmap, radians(-90));
        CGContextTranslateCTM(bitmap, -targetWidth, 0);

    } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
        // NOTHING
    } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM(bitmap, targetWidth, targetHeight);
        CGContextRotateCTM(bitmap, radians(-180.));
    }

    CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage* newImage = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);
    CGImageRelease(ref);

    return newImage;
}

The benefit of this method is that it is thread-safe, and it takes care of all the small things the UIKit version does for you (using the correct color space and bitmap info, dealing with image orientation).
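Because of that thread-safety, you can push the work onto a background thread. A minimal sketch (the thumbnailReady: callback and the 75-pixel size are hypothetical choices of your own):

- (void)makeThumbnailInBackground:(UIImage *)image
{
    [self performSelectorInBackground:@selector(doThumbnail:) withObject:image];
}

- (void)doThumbnail:(UIImage *)image
{
    // Background threads need their own autorelease pool (pre-ARC)
    NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];
    UIImage* thumb = [UIImage imageWithImage:image scaledToSize:CGSizeMake(75, 75)];

    // UIKit work (e.g., setting an image view) must go back to the main thread
    [self performSelectorOnMainThread:@selector(thumbnailReady:)
                           withObject:thumb
                        waitUntilDone:NO];
    [pool release];
}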

How Do I Resize and Maintain Aspect Ratio (like the AspectFill option)?

It is very similar to the method above, and it looks like this:

+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSizeWithSameAspectRatio:(CGSize)targetSize
{
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

    if (CGSizeEqualToSize(imageSize, targetSize) == NO) {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;

        // Use the larger factor so the image fills the target (AspectFill)
        if (widthFactor > heightFactor) {
            scaleFactor = widthFactor; // width fits exactly; height overflows
        }
        else {
            scaleFactor = heightFactor; // height fits exactly; width overflows
        }

        scaledWidth  = width * scaleFactor;
        scaledHeight = height * scaleFactor;

        // center the image along the overflowing axis
        if (widthFactor > heightFactor) {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        }
        else if (widthFactor < heightFactor) {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }

    CGImageRef imageRef = [sourceImage CGImage];
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);

    if ((bitmapInfo & kCGBitmapAlphaInfoMask) == kCGImageAlphaNone) {
        bitmapInfo = kCGImageAlphaNoneSkipLast;
    }

    CGContextRef bitmap;

    // Pass 0 for bytesPerRow so CoreGraphics computes it for the new width
    if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
        bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);

    } else {
        bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);

    }

    // In the right or left cases, we need to switch scaledWidth and scaledHeight,
    // and also the thumbnail point
    if (sourceImage.imageOrientation == UIImageOrientationLeft) {
        thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
        CGFloat oldScaledWidth = scaledWidth;
        scaledWidth = scaledHeight;
        scaledHeight = oldScaledWidth;

        CGContextRotateCTM(bitmap, radians(90));
        CGContextTranslateCTM(bitmap, 0, -targetHeight);

    } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
        thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
        CGFloat oldScaledWidth = scaledWidth;
        scaledWidth = scaledHeight;
        scaledHeight = oldScaledWidth;

        CGContextRotateCTM(bitmap, radians(-90));
        CGContextTranslateCTM(bitmap, -targetWidth, 0);

    } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
        // NOTHING
    } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM(bitmap, targetWidth, targetHeight);
        CGContextRotateCTM(bitmap, radians(-180.));
    }

    CGContextDrawImage(bitmap, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage* newImage = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);
    CGImageRelease(ref);

    return newImage;
}

The approach here is to create a bitmap of the desired size, but to draw an image that is actually larger along one axis, thus maintaining the aspect ratio (the overflow is cropped away).
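As a usage sketch, producing a square thumbnail from a picked image might look like this (the 75-pixel size is just an example):

UIImage* thumbnail = [UIImage imageWithImage:pickedImage
             scaledToSizeWithSameAspectRatio:CGSizeMake(75, 75)];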

So We've Got Our Scaled Images - How Do I Save Them To Disk?

This is pretty simple. Remember that we want to save a compressed version to disk, and not the uncompressed pixels. Apple provides two functions in UIKit that help us with this:

NSData* UIImagePNGRepresentation(UIImage* image);
NSData* UIImageJPEGRepresentation(UIImage* image, CGFloat compressionQuality);

And if you want to use them, you'd do something like:

UIImage* myThumbnail = ...; // Get some image
NSData* imageData = UIImagePNGRepresentation(myThumbnail);

Now we're ready to save it to disk, which is the final step (say into the documents directory):

// Give a name to the file
NSString* imageName = @"MyImage.png";

// Now, we have to find the documents directory so we can save it
// Note that you might want to save it elsewhere, like the cache directory,
// or something similar.
NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString* documentsDirectory = [paths objectAtIndex:0];

// Now we get the full path to the file
NSString* fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imageName];

// and then we write it out
[imageData writeToFile:fullPathToFile atomically:NO];

You would repeat this for every version of the image you have.

How Do I Load These Images Back Into Memory?

Just look at the various UIImage initialization methods, such as +imageWithContentsOfFile: in the Apple documentation.
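For example, loading back the thumbnail saved above (reusing fullPathToFile from the earlier snippet) is a one-liner:

UIImage* loadedImage = [UIImage imageWithContentsOfFile:fullPathToFile];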

+1  A: 

Great tutorial. Thanks. But I can't get the "modify pixels" code working.

  1. I guess bytes[i] should be changed to pixelBytes[i]?

  2. I get a dereferencing void* pointer warning and an invalid use of void expression error when compiling.

Your code:

UIImage* image = ...; // An image
NSData* pixelData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
void* pixelBytes = [pixelData bytes];

// Take away the red pixel, assuming 32-bit RGBA
for(int i = 0; i < [pixelData length]; i += 4) {
    bytes[i] = 0; // red
    bytes[i+1] = bytes[i+1]; // green
    bytes[i+2] = bytes[i+2]; // blue
    bytes[i+3] = bytes[i+3]; // alpha
}
Superpanic
You're right, it should be `pixelBytes`. Also, cast it to a `byte*` (you could do `byte* pixelBytes = (byte*)[pixelData bytes];`).
Itay
Look at the modified "question" - I posted corrected code.
Itay
A: 

It isn't working...

Here is my code:

-(IBAction)ButtonClicked
{
    UIImage* image = Image; // An image
    NSData* pixelData = (NSData*)CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    Byte* pixelBytes = (Byte *)[pixelData bytes];

    // Take away the red pixel, assuming 32-bit RGBA
    for(int i = 0; i < [pixelData length]; i += 4) {
        pixelBytes[i] = 0; // red
        pixelBytes[i+1] = pixelBytes[i+1]; // green
        pixelBytes[i+2] = pixelBytes[i+2]; // blue
        pixelBytes[i+3] = pixelBytes[i+3]; // alpha
    }

    NSData* newPixelData = [NSData dataWithBytes:pixelBytes length:[pixelData length]];
    UIImage* newImage = [UIImage imageWithData:newPixelData];
    EditImageView.image = newImage;
}

Here EditImageView is a UIImageView which already has an image. I want to modify the image, but this code doesn't seem to work... am I doing something wrong?

Rahul Vyas
Look at the modified "question" - I posted corrected code.
Itay
Why needlessly copy `pixelBytes[i+1] = pixelBytes[i+1];`?
mahboudz
+6  A: 

If you want to save the UIImage back into your user's photo roll, there's a built-in function for doing this as well.

void UIImageWriteToSavedPhotosAlbum(UIImage* image, id target, SEL action, void* userdata);

Here's the signature of the saving-finished callback (the action above):

- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo;

You can, of course, omit the saving callback, but saving to the photo roll is asynchronous and can fail, so you probably want some indicator.
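A minimal call sketch, assuming myImage is the image to save and self implements the callback above:

UIImageWriteToSavedPhotosAlbum(myImage, self,
    @selector(image:didFinishSavingWithError:contextInfo:), NULL);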

Tetrad
Great pointer! Thanks for adding it.
Itay
A: 

Hey, it's not working...

Here is the code I have written, following what you wrote above:

-(IBAction)GrayScaleClicked
{
    UIImage* image = EditImageView.image;
    NSData* pixelData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    unsigned char* pixelBytes = (unsigned char *)[pixelData bytes];
    int length = [pixelData length];

    // Take away the red pixel, assuming 32-bit RGBA
    for(int i = 0; i < length; i += 4) {
        pixelBytes[i] = 0; // red
        pixelBytes[i+1] = pixelBytes[i+1]; // green
        pixelBytes[i+2] = pixelBytes[i+2]; // blue
        pixelBytes[i+3] = pixelBytes[i+3]; // alpha
    }

    NSData* newPixelData = [NSData dataWithBytes:pixelBytes length:length];
    UIImage* newImage = [UIImage imageWithData:newPixelData];
    [EditImageView setImage:[UIImage imageWithCGImage:newImage.CGImage]];
    [EditImageView setNeedsDisplay];
}

It removes the image from the image view and nothing happens. Please post checked code.

Rahul Vyas
+1  A: 

Apple does not "strip the EXIF data" from images. The thing is, you get the raw image data BEFORE the EXIF data is ever added. EXIF only makes sense in the context of an image format like JPEG or PNG, which you do not have to start with...

The real problem is that when you build PNG or JPG representations, EXIF is not added at that time.

You can however add it yourself - once you have a JPG or PNG, you can write what EXIF you like to it using the iPhone-exif library:

http://code.google.com/p/iphone-exif/

Kendall Helmstetter Gelner
I think we're saying something similar, but yet slightly different. The scenario I'm talking about is when you select an image that was taken with the camera previously, and saved to your photo roll. Those images, as far as I know, do have EXIF data stored in their on-disk PNG/JPG representations. However, when you select such an image, you only get the UIImage, which provides you with no way to get that EXIF data, even if you wanted. As such, for all intents and purposes, the EXIF data is "stripped". Does that clarify what I meant?
Itay
+3  A: 

Awesome, thanks! Your method signature for resizing should say targetSize instead of newSize. And Xcode doesn't like your radians() method. Am I missing something?

cocoaholic
You and me both. But as a quick fix, you can use this formula: float angleRadians = angle * (3.1415927/180.0);
Mustafa
You can also add this #define: `#define radians( degrees ) ( degrees * M_PI / 180 )`
gnuchu
A: 

Hi Itay, great article. I am a little confused about where all these methods fit in the overall app. Can you please post some example app code? Or if I post you mine, will you kindly take a look? Thanks! ~prs

prs
You'd use these snippets if you had some code that needed to handle the relevant issues with `UIImagePickerController` or `UIImage`.
Itay
A: 

Thank you for the tips! The modal view is great for user interaction - any idea if there is a programmatic way of accessing images in the album?

Ramesh
No, there is no programmatic access available to the user's photos, due to security restrictions.
Itay
That is what I thought too, but there is an app called Pixelpipe that does it, and does it nicely. I wonder how they do it.
Ramesh
Pixelpipe just got pulled from the store for doing whatever they were doing.
kubi
As of iOS 4.0, programmatic access is now allowed via one of the AV frameworks.
Brad Smith
A: 

Love the post. Thanks! I have a question that I feel has a simple answer, but I can't figure it out...I just want to display the camera, but not take a picture. My code runs, but the camera does not show up.

Here's what I've got...

- (void)viewDidLoad 
{   
    [super viewDidLoad];    

    // Create a bool variable "camera" and call isSourceTypeAvailable to see if camera exists on device
    BOOL camera = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera];

    // If there is a camera, then display the world through the viewfinder
    if(camera)
    {   
        UIImagePickerController *picker = [[UIImagePickerController alloc] init];

        // Since I'm not actually taking a picture, is a delegate function necessary?
        picker.delegate = self;

        picker.sourceType = UIImagePickerControllerSourceTypeCamera;
        [self presentModalViewController:picker animated:YES];

        NSLog(@"Camera is available");
    }

    // Otherwise, do nothing.
    else 
        NSLog(@"No camera available");
}

On the device the "if" portion runs, and in the simulator the "else" does, which is expected. But as I said, the camera still doesn't show up. Somebody please help!

Thanks!
Thomas
Post that as a separate question. SO does not work like a forum - as answers you should post only info that solves the initial problem or adds some (hopefully) useful details.
Vladimir
A: 

Hi,

I am using the resize and preserve aspect ratio code, but I am noticing an issue with left/right orientation. When you resize an image from the camera (I tried on a 3G and a 4) and the orientation is right or left, the images don't get resized properly (either partially black or cropped).
Any insights into what is happening and how that can be corrected?

Appreciate your response.

Thanks.

kg2010
You need to post this as a separate question of your own. This isn't a forum format. All the answers here should relate directly to the parent.
TechZen
A: 

I'm getting the following error with the last resizing function, at the CGBitmapContextCreate call:

CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaLast; 1200 bytes/row.

Here is how the function is called:

//open large image
NSFileManager *fileManager = [NSFileManager defaultManager];
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *fullPath = [documentsDirectory stringByAppendingPathComponent:image.fileName];
UIImage *theUIImage = [UIImage imageWithContentsOfFile:fullPath];

//create thumbnail
NSString *thumbFilePath = [fullPath stringByAppendingString:@"-t"];
CGSize thumbSize = CGSizeMake(PROJECT_CELL_HEIGHT, PROJECT_CELL_HEIGHT);
UIImage *thumbImage = [UIImage imageWithImage:theUIImage scaledToSizeWithSameAspectRatio:thumbSize];
NSData *thumbData = UIImagePNGRepresentation(thumbImage);
[fileManager createFileAtPath:thumbFilePath contents:thumbData attributes:nil];

In addition to the error, thumbData ends up being nil.

Strangely, I do not get an error if the same image is returned from a picker. So maybe this has something to do with how the image is being retrieved (imageWithContentsOfFile)?

Thanks for any insight.

blindJesse
Found some help for this issue here: http://stackoverflow.com/questions/2457116/iphone-changing-cgimagealphainfo-of-cgimage
blindJesse
A: 

This is great information, but still have a question...

You know how Mail.app presents users with a choice when sending big images (>1 MB)? It displays the "Original" image size plus 3 options: Large, Medium, Small.

For each of the options, Mail.app displays the size of the image. I have not yet been able to figure out a way to calculate/guesstimate this size before performing the actual resize.

Also, even the size of the original image only seems to be available once you get the JPEG or PNG representation of the image as NSData.

Any suggestions on how we could quickly get the size of the image - both the original size, and the resized size before actually performing the resize?

Thanks,

Rodrigo
You can't, unless you guess, and that would be really hard. By the time you've calculated what the size would be, you may as well have done the actual conversion, which is probably what Mail does.
Jasconius
That's the thing... when using Mail.app, the UIActionSheet with the options (Small, Medium, Large, Actual) shows up really quickly, but the actual sending of the message happens in the background. Thus, I think Mail.app somehow guesses the sizes (and they vary for each picture!), while the actual resizing happens in the background.
Rodrigo
I also think there's a possibility that pre-iOS 4, Apple simply cheated and accessed the picture directory directly; post-iOS 4, this is most likely possible using the new Assets library.
Itay