I have a UIImage that is filled with an image coming from the iPhone camera.

As the image has 2048x1536 pixels, it consumes a big chunk of memory.

  1. I hope this image is 24 bits, rather than 32 bits, but I am not sure. So this is the first question. Is the image coming from the iPhone camera 24 or 32 bits?
  2. The second question is: if the image is 32 bits, how can I make it 24 bits to save memory?
  3. The third question is: is there anything I can do to this image, WITHOUT CHANGING ITS SIZE, before assigning it to a variable to save space?

No solution involving OpenGL, please.

Thanks in advance.


(This isn't really an answer, but more of an observation)

The raw data for a picture that size will consume about 12 MB of space, uncompressed.

2048*1536 = 3,145,728 pixels.

Each pixel needs 1 byte of red data, 1 byte of green data, 1 byte of blue data, and 1 byte of alpha (transparency) data = 4 bytes/pixel.

3145728 * 4 = 12,582,912 bytes = exactly 12MB.

For the iPhone that's a big chunk of memory indeed, and I doubt there's much you can do to get around it (barring reading only segments of the image from disk at a time, which, depending on your use case, could really kill the battery).

Dave DeLong
is there any way to remove the alpha from the image coming from the camera to save memory? Who needs alpha in a full screen opaque image?
Digital Robot

There's not a whole lot you can do without having that picture in memory and creating a second buffer to store the modified version. That second buffer could use fewer bits per color component, and could even skip the alpha value. If you go with 4 bits per component, you'd shave 50% off the original, not counting the additional savings from dropping the alpha.

A few simple calls will tell you how the picture is laid out, such as its bits per component and whether it includes an alpha channel:

CGImageAlphaInfo CGImageGetAlphaInfo (
   CGImageRef image
);
Return Value A CGImageAlphaInfo constant that specifies (1) whether the bitmap contains an alpha channel, (2) where the alpha bits are located in the image data, and (3) whether the alpha value is premultiplied. For possible values, see “Constants.” The function returns kCGImageAlphaNone if the image parameter refers to an image mask.

size_t CGImageGetBitsPerComponent (
   CGImageRef image
);
Return Value The number of bits used in memory for each color component of the specified bitmap image (or image mask). Possible values are 1, 2, 4, or 8. For example, for a 16-bit RGB(A) colorspace, the function would return a value of 4 bits per color component.

size_t CGImageGetBitsPerPixel (
   CGImageRef image
);
Return Value The number of bits used in memory for each pixel of the specified bitmap image (or image mask).

This should get you started on seeing what the image is composed of. Recreating it with fewer bits is more work, and temporarily requires more memory until you can discard the original image.

Thanks for your answer! Just one more question. I was examining the pictures coming from the camera. They have no alpha, but why in hell are they 32 bpp? How do I transform them to 24 bpp?
Digital Robot
I think they are 32 bpp because reading the data on word boundaries is faster. The question then is whether the speed penalty of processing a 24 bpp picture is worth the 25% space savings.
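For what it's worth, I believe Quartz bitmap contexts on the iPhone don't support a 24 bpp, 8-bits-per-component RGB destination, so you can't just redraw the image into a 24 bpp CGBitmapContext; you'd have to repack the raw pixel buffer yourself and manage it outside of CGImage. The repacking itself is trivial (a sketch in plain C; `strip_alpha` is a made-up helper name):

```c
#include <stddef.h>
#include <stdint.h>

/* Drop the alpha byte from an RGBA8888 buffer, producing tightly packed
   24 bpp RGB. `src` holds `count` pixels at 4 bytes each; `dst` must have
   room for `count * 3` bytes. Saves 25% of the buffer, at the cost of
   losing word-aligned pixel access. */
void strip_alpha(const uint8_t *src, uint8_t *dst, size_t count) {
    for (size_t i = 0; i < count; i++) {
        dst[i * 3 + 0] = src[i * 4 + 0];  /* R */
        dst[i * 3 + 1] = src[i * 4 + 1];  /* G */
        dst[i * 3 + 2] = src[i * 4 + 2];  /* B */
        /* src[i * 4 + 3] (alpha) is discarded */
    }
}
```

Note that to display the result again you'd have to pad it back out to a format Quartz accepts, so this only helps while the image sits idle in memory.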