views: 180
answers: 2

On an iPhone 3GS, the image captured by the camera is 2048x1536 pixels. If my math is correct, opening this image in a CGLayer will consume about 12.5 MB.

SpringBoard will terminate any application that allocates beyond roughly 12 MB (at least that is what happens to me).

Manipulating this image with a function like CGContextDrawLayer will consume at least another 12 MB.

This is 24 MB.

How can one manipulate such images on the iPhone without having the program terminated?

Is there any way to reduce the footprint of the image taken by the camera without reducing its dimensions?

Any clues? Thanks.

+1  A: 

Your screen only has 320 x 480 pixels, so putting anything more on the layer seems to be a waste of memory.

So you might as well translate the origin of the original image and scale it down from 2048 x 1536 pixels to 320 x 480 pixels before putting it onto a layer.

If you use a UIScrollView to present the layer, for example, you would write code so that pinching and stretching would calculate a new 320 x 480 pixel representation based on the current zoom level, determined from the frame and bounds of the view. In your code, tapping-and-dragging would translate the origin and recalculate the missing bits.

You can see this effect with Safari, when you zoom into the document. It goes from blurry to sharp as the new view is rendered. Likewise, as you drag the view, its newly missing parts are calculated and added to the view.

Regardless of the touch event, you would probably only want to put a 320 x 480 pixel representation on the layer.

Alex Reynolds
thanks but let's see another example: suppose I have two UIViews acting as proxies. The user can pinch, zoom, position, rotate them, etc. Now the user clicks SAVE. I have to save a full 2048x1536 = 12 MB image offscreen. BOOM... CRASH! BUMPED BY SPRINGBOARD! This is what I am saying.
Digital Robot
just to complement my last comment... I say this because I am doing exactly that right now, i.e., writing the image to a context using CGContextDrawImage, and my app is killed as soon as I call this function. With a smaller image it works, but with the image coming from the camera it crashes. I even tried writing it in pieces (2, 4, 8 pieces); same result.
Digital Robot
Perform transformations on your view, and then perform those transformations on your original image (which is not put in a layer).
Alex Reynolds
sorry, but can you explain that better? I am a newbie with Quartz and I am not sure I understand your point.
Digital Robot
+2  A: 

You should consider using NSInputStream to process your image in chunks of whatever size makes sense. For example, you might read 1 MB of data, process it, write the result to an NSOutputStream, and then repeat 11 more times until EOF.

More likely than not, your image processing algorithm will determine the optimal chunk size.

Zack
that's it! thanks!!!!!!!
Digital Robot
Glad to help. Apple provides a nice guide to using NSStream (if you haven't found it already), and there is a good amount of info out there. Happy coding!
Zack