Am I correct in saying that a 16-bit image will be decoded and drawn faster than a 24- or 32-bit one? I know the file size will be smaller, but if the bitmaps will actually be drawn faster then it would be worth the effort to convert them. If it is faster, how would I go about saving a 16-bit JPEG file? I only found an option in Photoshop to save a 16-bit bitmap... which is 54 MB.

A: 

I hope you mean the 16-bit RGB 565 format. Yes, it should be faster to render than the 24-bit RGB 888 or 32-bit RGB 8888 formats. What you saw in Photoshop is probably not RGB 565 but RGB with 16 bits per component, which is double the size of RGB with 8 bits per component.

AFAIK JPEG doesn't support 16-bit formats (either 565 or 16 bits per component).
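For illustration, a minimal sketch (the helper name is hypothetical) of how RGB 565 packs three color components into a single 16-bit value, versus the 8 bits per component of RGB 888:

    // Hypothetical helper: pack 8-bit R, G, B values into one 16-bit
    // RGB 565 pixel (5 bits red, 6 bits green, 5 bits blue).
    static short packRgb565(int r, int g, int b) {
        return (short) (((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }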

Matias Valdenegro
+2  A: 

It depends. If your screen (and thus the surfaces drawn to it) is 16 bit, it can be faster; if it is 32 bit, though, a 32-bit bitmap may be faster.
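As an aside, a minimal sketch of checking the screen depth, assuming this runs inside an Activity (Display.getPixelFormat() was the API for this at the time):

    // Returns true when the default display is 16 bit (RGB 565).
    boolean screenIs16Bit() {
        int format = getWindowManager().getDefaultDisplay().getPixelFormat();
        return format == android.graphics.PixelFormat.RGB_565;
    }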

However if you are looking at this in terms of resources -- which means JPEG or PNG images -- saying they are "16 bit" or "32 bit" is fairly meaningless. JPEG is a compressed color representation that you will typically expand to 32 bit, though you could also decompress to 16 bit (and you'd probably want to dither while doing so for it to look decent). PNG can store in a lot of representations, depending on the image, and generally picks what is best. Moreover, during packaging, aapt runs through all PNG images and re-compresses them into a final image that is as small as possible, so it may even use a paletted representation if it can.

So how you store images does not really matter that much; what matters is the bitmap that is created when decompressing them at runtime. There are some general rules here (a decode sketch follows the list):

  • If the image has alpha, it will need to be decompressed to full 32 bit.
  • If the frame buffer and surfaces are 32 bit, the image should be decompressed to 32 bit.
  • If the image doesn't have alpha, it may be either 888 (32 bit) or 565 (16 bit). Picking which to use is... tricky.
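A minimal sketch of steering the decode (the resource ID here is hypothetical; inPreferredConfig is only a hint, and images with alpha still come back 32 bit):

    // Ask for a 16-bit (RGB 565) bitmap at decode time.
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inPreferredConfig = Bitmap.Config.RGB_565;
    opts.inDither = true; // dither while down-converting to reduce banding
    Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.photo, opts);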

Historically the devices we have worked with on the platform have had 16 bit screens, so we've had to deal with the complications from it. The main issue is balancing memory use vs. rendering performance vs. quality. For memory use and rendering performance, 16 bit is best... however, for many images there will be color banding if there is no dithering of the image.

Where to do that dithering is also tricky: ideally you'd do it as part of generating the original image, but that limits what you can do with it (no scaling, which means no use of 9-patches). On the other side, you could do it when rendering, but that means you need to load the image as 32 bit and dither every time it is drawn to the screen. This gives the most flexibility, but has more memory and performance impact.
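A minimal sketch of the render-time option, assuming you already have a 32-bit Bitmap and a Canvas to draw into:

    // Keep the bitmap at 32 bit and dither at draw time, so the same
    // image looks good on both 16-bit and 32-bit surfaces.
    Paint paint = new Paint();
    paint.setDither(true);       // dither when compositing onto a lower-depth surface
    paint.setFilterBitmap(true); // smoother results if the bitmap is scaled
    canvas.drawBitmap(bitmap, 0f, 0f, paint);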

Now in practice, this actually ends up being a rare issue for the platform -- because almost all images that are used as 9-patches or such also have transparency, they need to be loaded as 32 bit anyway, so it is not too big a deal to dither when rendering.

What this all boils down to is:

  • If your image has transparency, don't worry about it, it will be loaded 32 bit anyway.
  • If your image does not have transparency, you'll need to:
    • Decide whether to dither the original image. This will give better quality on 16 bit screens (you'll do better dithering than the performance-critical rendering code), but won't make full use of 32 bit screens.
    • Let the platform decide what to do. This will give good results for both 16 bit and 32 bit screens, but you don't want to scale the image.
    • Load the image yourself, and use the APIs explicitly to decide what format to use and when (or if) to dither (a sketch follows the list).
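A minimal sketch of that last option (res and resId are assumed to be in scope): decode at full depth, then control dithering per use:

    // Decode at full 32-bit depth, then decide per-use whether to dither.
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inPreferredConfig = Bitmap.Config.ARGB_8888; // keep full fidelity
    opts.inDither = false;                            // dither at draw time instead
    Bitmap full = BitmapFactory.decodeResource(res, resId, opts);

    BitmapDrawable drawable = new BitmapDrawable(res, full);
    drawable.setDither(true); // only takes effect on a 16-bit surface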
hackbod
great answer, thank you.
Evan Kimia