Resizing a camera UIImage returned by the UIImagePickerController takes a ridiculously long time if you do it the usual way, as in this post.
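For reference, by "the usual way" I mean a Core Graphics round trip like the following sketch (the 640x480 target and the bitmap parameters are from my test case; the helper name is just for illustration):

```objc
#import <UIKit/UIKit.h>

// The slow path: draw the full-resolution CGImage into a new bitmap
// context at the target size, then wrap the result in a UIImage.
static UIImage *ResizeImage(UIImage *image, CGSize newSize) {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 newSize.width,
                                                 newSize.height,
                                                 8,   // bits per component
                                                 0,   // let CG pick bytes per row
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);

    // This single call is where nearly all of the ~1.5 seconds goes
    // for a 2048x1536 camera image.
    CGContextDrawImage(context,
                       CGRectMake(0, 0, newSize.width, newSize.height),
                       image.CGImage);

    CGImageRef scaled = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    UIImage *result = [UIImage imageWithCGImage:scaled];
    CGImageRelease(scaled);
    return result;
}
```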
[Update: last call for creative ideas here! My next option is to go ask Apple, I guess.]
Yes, it's a lot of pixels, but the graphics hardware on the iPhone is perfectly capable of drawing lots of 1024x1024 textured quads onto the screen in 1/60th of a second, so there really should be a way of resizing a 2048x1536 image down to 640x480 in a lot less than 1.5 seconds.
So why is it so slow? Is the underlying image data the OS returns from the picker somehow not ready to be drawn, so that it has to be swizzled in some fashion that the GPU can't help with?
My best guess is that it needs to be converted from RGBA to ABGR or something like that; can anybody think of a way that it might be possible to convince the system to give me the data quickly, even if it's in the wrong format, and I'll deal with it myself later?
As far as I know, the iPhone doesn't have any dedicated "graphics" memory, so there shouldn't be a question of moving the image data from one place to another.
So, the question: is there some alternative drawing method besides just using CGBitmapContextCreate and CGContextDrawImage that takes more advantage of the GPU?
Something to investigate: if I start with a UIImage of the same size that's not from the image picker, is it just as slow? Apparently not...
Update: Matt Long found that it only takes 30 ms to resize the image you get back from the picker in [info objectForKey:@"UIImagePickerControllerEditedImage"], if you've enabled cropping with the manual camera controls. That isn't helpful for the case I care about, where I'm using takePicture to take pictures programmatically. I do see that the edited image is kCGImageAlphaPremultipliedFirst but the original image is kCGImageAlphaNoneSkipFirst.
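For anyone who wants to reproduce that check, this is roughly how I'm inspecting the pixel formats in the picker delegate callback (a sketch; the dictionary keys are the standard picker-info keys):

```objc
#import <UIKit/UIKit.h>

// In the UIImagePickerControllerDelegate callback, compare the alpha
// info of the original and (if cropping was enabled) edited images.
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *original = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
    UIImage *edited   = [info objectForKey:@"UIImagePickerControllerEditedImage"];

    // CGImageGetAlphaInfo returns kCGImageAlphaNoneSkipFirst (6) for the
    // original and kCGImageAlphaPremultipliedFirst (2) for the edited image.
    NSLog(@"original alpha info: %d", (int)CGImageGetAlphaInfo(original.CGImage));
    if (edited) {
        NSLog(@"edited alpha info: %d", (int)CGImageGetAlphaInfo(edited.CGImage));
    }
}
```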
Further update: Jason Crawford suggested CGContextSetInterpolationQuality(context, kCGInterpolationLow), which does in fact cut the time from about 1.5 sec to 1.3 sec, at a cost in image quality, but that's still far from the speed the GPU should be capable of!
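If you want to try the same thing, the call goes on the destination bitmap context before the draw (here context is the CGBitmapContextCreate context and cgImage is the full-resolution camera image):

```objc
// Lower interpolation quality trades resampling smoothness for speed;
// it must be set on the destination context before CGContextDrawImage.
CGContextSetInterpolationQuality(context, kCGInterpolationLow);
CGContextDrawImage(context, CGRectMake(0, 0, 640, 480), cgImage);
```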
Last update before the week runs out: user refulgentis did some profiling which seems to indicate that the 1.5 seconds is spent writing the captured camera image out to disk as a JPEG and then reading it back in. If true, very bizarre.