I've noticed that there are many questions about how to handle UIImage objects, especially in conjunction with UIImagePickerController, and then display the result in a view (usually a UIImageView). Here is a collection of common questions and their answers. Feel free to edit and add your own.
I obviously learnt all this information from somewhere too. Various forum posts, StackOverflow answers and my own experimenting brought me to all these solutions. Credit goes to those who posted some sample code that I've since used and modified. I don't remember who you all are - but hats off to you!
How Do I Select An Image From the User's Images or From the Camera?
You use UIImagePickerController. The documentation for the class gives a decent overview of how to use it, and can be found here.
Basically, you create an instance of the class (it's a modal view controller), set yourself (or some other object) as its delegate, and present it. You'll then be notified when the user picks some form of media (a movie or an image in 3.0 on the 3GS), and you can do whatever you want with it.
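For example, a minimal sketch of presenting the picker from a view controller (which must adopt both UIImagePickerControllerDelegate and UINavigationControllerDelegate) might look like this:
- (void)choosePhoto {
    UIImagePickerController* picker = [[UIImagePickerController alloc] init];
    // Use UIImagePickerControllerSourceTypeCamera to take a new photo instead,
    // but check +isSourceTypeAvailable: first
    picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
    picker.delegate = self;
    [self presentModalViewController:picker animated:YES];
    [picker release]; // the modal presentation retains the picker while it's on screen
}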
My Delegate Was Called - How Do I Get The Media?
The delegate method signature is the following:
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info;
It's worth putting a breakpoint in the delegate method and inspecting the dictionary in the debugger to see what it contains; you use it to extract the media. For example:
UIImage* image = [info objectForKey:UIImagePickerControllerOriginalImage];
There are other keys that work as well, all in the documentation.
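Putting it all together, a typical delegate implementation looks roughly like this (a sketch that assumes self presented the picker modally, as in the earlier example):
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage* image = [info objectForKey:UIImagePickerControllerOriginalImage];
    // ... resize it, save it, display it - see the rest of this post ...
    [self dismissModalViewControllerAnimated:YES];
}

// The user can also cancel, so dismiss the picker here too
- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
    [self dismissModalViewControllerAnimated:YES];
}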
OK, I Got The Image, But It Doesn't Have Any Geolocation Data. What gives?
Unfortunately, Apple decided that we're not worthy of this information. When they load the data into the UIImage, they strip it of all the EXIF/geolocation data.
Can I Get To The Original File Representing This Image on the Disk?
Nope. For security purposes, you only get the UIImage.
How Can I Look At The Underlying Pixels of the UIImage?
Since the UIImage is immutable, you can't access its pixels directly. However, you can make a copy of them. The code to do this looks something like this:
UIImage* image = ...; // An image
// Copy the underlying pixel data (it's a copy, so remember to release it)
NSData* pixelData = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const unsigned char* pixelBytes = (const unsigned char *)[pixelData bytes];
// Walk the buffer, assuming 32-bit RGBA pixels
for (NSUInteger i = 0; i < [pixelData length]; i += 4) {
    unsigned char red   = pixelBytes[i];
    unsigned char green = pixelBytes[i + 1];
    unsigned char blue  = pixelBytes[i + 2];
    unsigned char alpha = pixelBytes[i + 3];
    // ... inspect the components here ...
}
[pixelData release]; // CGDataProviderCopyData follows the Create/Copy rule
However, note that CGDataProviderCopyData gives you an "immutable" copy of the data - meaning you can't change it (and you may get an EXC_BAD_ACCESS crash if you try). Look at the next question if you want to see how you can modify the pixels.
How Do I Modify The Pixels of the UIImage?
The UIImage is immutable, meaning you can't change it. Apple posted a great article on how to get a copy of the pixels and modify them, and rather than copy and paste it here, you should just go read the article.
Once you have the bitmap context as they mention in the article, you can do something similar to this to get a new UIImage with the modified pixels:
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage* newImage = [UIImage imageWithCGImage:ref];
CGImageRelease(ref); // imageWithCGImage: retains the underlying CGImage, so release our reference
Do remember to release your other references too (the context, the color space, any buffers you allocated), otherwise you're going to be leaking quite a bit of memory.
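If you just want a rough idea of what the article walks you through, here's a minimal sketch of the overall flow. This is not Apple's exact code; it assumes an RGBA bitmap with premultiplied alpha and ignores image orientation:
UIImage* image = ...; // the image whose pixels you want to modify
CGImageRef imageRef = image.CGImage;
size_t width  = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);

// Create a writable RGBA bitmap context and draw the image into it
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                            colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);

// Now the pixels are ours to change - e.g. zero out the red channel
unsigned char* pixels = (unsigned char *)CGBitmapContextGetData(bitmap);
for (size_t i = 0; i < width * height * 4; i += 4) {
    pixels[i] = 0; // red (green, blue, alpha live at i+1, i+2, i+3)
}

// Turn the modified bitmap back into a UIImage, then release our references
CGImageRef newRef = CGBitmapContextCreateImage(bitmap);
UIImage* newImage = [UIImage imageWithCGImage:newRef];
CGImageRelease(newRef);
CGContextRelease(bitmap);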
After I Select 3 Images From The Camera, I Run Out Of Memory. Help!
You have to remember that even though on disk these images take up only a few hundred kilobytes at most, that's because they're compressed as a PNG or JPG. When they are loaded into the UIImage
, they become uncompressed. A quick over-the-envelope calculation would be:
width x height x 4 = bytes in memory
That's assuming 32-bit pixels. If you have 16-bit pixels (some images use 16-bit formats such as RGBA-5551), then you'd replace the 4 with a 2.
Now, images taken with the camera are 1600 x 1200 pixels, so let's do the math:
1600 x 1200 x 4 = 7,680,000 bytes = ~8 MB
8 MB is a lot, especially when you have a limit of around 24 MB for your application. That's why you run out of memory.
OK, I Understand Why I Have No Memory. What Do I Do?
There is never any reason to display images at their full resolution. The iPhone has a screen of 480 x 320 pixels, so you're just wasting memory. If you find yourself in this situation, ask yourself the following question: do I need the full-resolution image?
If the answer is yes, then you should save it to disk for later use.
If the answer is no, then read the next part.
Once you've decided what to do with the full-resolution image, then you need to create a smaller image to use for displaying. Many times you might even want several sizes for your image: a thumbnail, a full-size one for displaying, and the original full-resolution image.
OK, I'm Hooked. How Do I Resize the Image?
Unfortunately, there is no built-in method for resizing an image. Also, it's important to note that when you resize it, you'll get a new image - you're not modifying the old one.
There are a couple of methods to do the resizing. I'll present them both here, and explain the pros and cons of each.
Method 1: Using UIKit
+ (UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize
{
    // Create a graphics image context
    UIGraphicsBeginImageContext(newSize);

    // Tell the old image to draw in this new context, with the desired
    // new size
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];

    // Get the new image from the context
    UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();

    // End the context
    UIGraphicsEndImageContext();

    // Return the new image.
    return newImage;
}
This method is very simple, and works great. It also deals with the UIImageOrientation for you, meaning that you don't have to care whether the camera was sideways when the picture was taken. However, this method is not thread-safe, and since thumbnailing is a relatively expensive operation (roughly 2.5 s on a 3G for a 1600 x 1200 pixel image), it is very much an operation you may want to do in the background, on a separate thread.
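Usage is straightforward. For example, to put a scaled copy of a picked photo into a UIImageView (assuming the method above lives in a utility class or UIImage category called ImageUtils - a name used here purely for illustration - and that you have an imageView outlet):
UIImage* picked = [info objectForKey:UIImagePickerControllerOriginalImage];
UIImage* scaled = [ImageUtils imageWithImage:picked scaledToSize:CGSizeMake(320, 240)];
self.imageView.image = scaled;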
Method 2: Using CoreGraphics
// Helper used below to convert degrees to radians (not a system function)
static inline CGFloat radians(CGFloat degrees) { return degrees * M_PI / 180.0f; }

+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSize:(CGSize)targetSize
{
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;

    CGImageRef imageRef = [sourceImage CGImage];
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);

    if (bitmapInfo == kCGImageAlphaNone) {
        bitmapInfo = kCGImageAlphaNoneSkipLast;
    }

    CGContextRef bitmap;

    // Passing 0 for bytesPerRow lets CoreGraphics compute it for the new size
    if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
        bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    } else {
        bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    }

    if (sourceImage.imageOrientation == UIImageOrientationLeft) {
        CGContextRotateCTM(bitmap, radians(90));
        CGContextTranslateCTM(bitmap, 0, -targetHeight);
    } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
        CGContextRotateCTM(bitmap, radians(-90));
        CGContextTranslateCTM(bitmap, -targetWidth, 0);
    } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
        // NOTHING
    } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM(bitmap, targetWidth, targetHeight);
        CGContextRotateCTM(bitmap, radians(-180.));
    }

    CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage* newImage = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);
    CGImageRelease(ref);

    return newImage;
}
The benefit of this method is that it is thread-safe, plus it takes care of all the small things (using the correct color space and bitmap info, dealing with image orientation) that the UIKit version handles for you.
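Because it's thread-safe, you can push the expensive work off the main thread. A minimal sketch, again assuming the method lives in a hypothetical ImageUtils class and that you have an imageView outlet:
// Kick off the resize on a background thread
- (void)startThumbnailing:(UIImage *)fullImage {
    [self performSelectorInBackground:@selector(generateThumbnail:) withObject:fullImage];
}

- (void)generateThumbnail:(UIImage *)fullImage {
    // Every secondary thread needs its own autorelease pool
    NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];
    UIImage* thumbnail = [ImageUtils imageWithImage:fullImage scaledToSize:CGSizeMake(75, 75)];
    // UIKit should only be touched on the main thread, so hand the result back there
    [self performSelectorOnMainThread:@selector(thumbnailReady:) withObject:thumbnail waitUntilDone:NO];
    [pool release];
}

- (void)thumbnailReady:(UIImage *)thumbnail {
    self.imageView.image = thumbnail;
}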
How Do I Resize and Maintain Aspect Ratio (like the AspectFill option)?
It is very similar to the method above, and it looks like this:
+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSizeWithSameAspectRatio:(CGSize)targetSize
{
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

    if (CGSizeEqualToSize(imageSize, targetSize) == NO) {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;

        if (widthFactor > heightFactor) {
            scaleFactor = widthFactor; // scale to fit height
        }
        else {
            scaleFactor = heightFactor; // scale to fit width
        }

        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;

        // center the image
        if (widthFactor > heightFactor) {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        }
        else if (widthFactor < heightFactor) {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }

    CGImageRef imageRef = [sourceImage CGImage];
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);

    if (bitmapInfo == kCGImageAlphaNone) {
        bitmapInfo = kCGImageAlphaNoneSkipLast;
    }

    CGContextRef bitmap;

    // As before, pass 0 for bytesPerRow so CoreGraphics computes it for the new size
    if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
        bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    } else {
        bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    }

    // In the right or left cases, we need to switch scaledWidth and scaledHeight,
    // and also the thumbnail point (radians() is the helper defined in the previous example)
    if (sourceImage.imageOrientation == UIImageOrientationLeft) {
        thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
        CGFloat oldScaledWidth = scaledWidth;
        scaledWidth = scaledHeight;
        scaledHeight = oldScaledWidth;

        CGContextRotateCTM(bitmap, radians(90));
        CGContextTranslateCTM(bitmap, 0, -targetHeight);
    } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
        thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
        CGFloat oldScaledWidth = scaledWidth;
        scaledWidth = scaledHeight;
        scaledHeight = oldScaledWidth;

        CGContextRotateCTM(bitmap, radians(-90));
        CGContextTranslateCTM(bitmap, -targetWidth, 0);
    } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
        // NOTHING
    } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM(bitmap, targetWidth, targetHeight);
        CGContextRotateCTM(bitmap, radians(-180.));
    }

    CGContextDrawImage(bitmap, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage* newImage = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);
    CGImageRelease(ref);

    return newImage;
}
The trick we employ here is to create a bitmap of the desired (target) size, but draw the image scaled so that it completely covers that bitmap; whatever overflows in one dimension is simply cropped off, which is what preserves the aspect ratio.
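For example, to get a 75 x 75 square thumbnail that crops rather than distorts (again assuming the method lives in a hypothetical ImageUtils category/class):
UIImage* squareThumb = [ImageUtils imageWithImage:photo scaledToSizeWithSameAspectRatio:CGSizeMake(75, 75)];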
So We've Got Our Scaled Images - How Do I Save Them To Disk?
This is pretty simple. Remember that we want to save a compressed version to disk, and not the uncompressed pixels. Apple provides two functions that help us with this (documentation is here):
NSData* UIImagePNGRepresentation(UIImage *image);
NSData* UIImageJPEGRepresentation(UIImage *image, CGFloat compressionQuality);
And if you want to use them, you'd do something like:
UIImage* myThumbnail = ...; // Get some image
NSData* imageData = UIImagePNGRepresentation(myThumbnail);
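If you'd rather trade a little quality for a much smaller file (usually the right call for camera photos), use the JPEG variant instead; the second parameter is the compression quality, from 0.0 (maximum compression, lowest quality) to 1.0 (best quality):
// 0.8 is a reasonable starting point for photos; tune to taste
NSData* jpegData = UIImageJPEGRepresentation(myThumbnail, 0.8);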
Now we're ready to save it to disk, which is the final step (say into the documents directory):
// Give a name to the file
NSString* imageName = @"MyImage.png";
// Now, we have to find the documents directory so we can save it
// Note that you might want to save it elsewhere, like the cache directory,
// or something similar.
NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString* documentsDirectory = [paths objectAtIndex:0];
// Now we get the full path to the file
NSString* fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imageName];
// and then we write it out
[imageData writeToFile:fullPathToFile atomically:NO];
You would repeat this for every version of the image you have.
How Do I Load These Images Back Into Memory?
Just look at the various UIImage initialization methods, such as +imageWithContentsOfFile:, in the Apple documentation.
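For example, to load the image we saved above back in (continuing with the fullPathToFile variable from the saving example):
// Note: unlike +imageNamed:, this does not cache the image,
// which is usually what you want for large photos
UIImage* savedImage = [UIImage imageWithContentsOfFile:fullPathToFile];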