How can a 32bpp (ARGB) image be converted to a 16bpp (ARGB) image using Java's libraries? Out of curiosity, what does this conversion do at the pixel level? If I have an int that holds the value of a pixel (all channels included), how would that int differ after the conversion?
Read in the image and save it in the format you need. From http://www.exampledepot.com/egs/javax.imageio/Graphic2File.html
// Create an image to save
RenderedImage rendImage = myCreateImage();

// Write the generated image to a file
try {
    // Save as PNG
    File file = new File("newimage.png");
    ImageIO.write(rendImage, "png", file);

    // Save as JPEG
    file = new File("newimage.jpg");
    ImageIO.write(rendImage, "jpg", file);
} catch (IOException e) {
    e.printStackTrace();  // don't swallow the exception silently
}
See the output of javax.imageio.ImageIO.getWriterFormatNames() to find the format you need.
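Note that saving as PNG or JPEG on its own will not give you 16bpp pixels. If you want an actual 16-bit pixel format in memory before saving, one common approach (a sketch, not the only way; the class and method names here are mine) is to redraw the source image into a BufferedImage with a 16-bit type such as TYPE_USHORT_565_RGB:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class Convert16 {
    // Convert any BufferedImage to a 16bpp (5-6-5) image by redrawing it.
    // The per-pixel quantization happens inside drawImage.
    static BufferedImage to16bpp(BufferedImage src) {
        BufferedImage dst = new BufferedImage(
                src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_USHORT_565_RGB);
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, null);
        g.dispose();
        return dst;
    }
}
```

Keep in mind TYPE_USHORT_565_RGB has no alpha channel; if you need 16-bit alpha, TYPE_USHORT_555_RGB also lacks it, so a custom ColorModel would be required.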
The in-memory representation of each pixel need not change (apart from the precision lost at 16 bpp), but the bytes stored on disk will.
A 32-bit AARRGGBB value converted to a 16-bit ARGB value would be something like this:
int argb = ((aarrggbb & 0x000000F0) >>> 4)
         | ((aarrggbb & 0x0000F000) >>> 8)
         | ((aarrggbb & 0x00F00000) >>> 12)
         | ((aarrggbb & 0xF0000000) >>> 16);
This packs everything into the lower 16 bits and leaves the upper 16 bits as 0. Note the unsigned shift >>> on the alpha term: in Java a signed >> would sign-extend when the top alpha bit is set, filling the upper 16 bits with 1s.
For each channel, you lose the lower 4 bits of colour information, the upper bits being the more significant ones. Masking truncates each channel to its nearest representable 4-bit value, which produces a visually unpleasant colour-banding effect across the image.
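To see the loss concretely, here is a small sketch (the class and method names are mine, not from any library) of truncating an 8-bit channel to 4 bits and expanding it back by replicating the nibble:

```java
public class Quantize {
    // Keep only the upper 4 bits of an 8-bit channel, then expand the
    // nibble back to 8 bits by replicating it (0xA? -> 0xAA), so the
    // full 0..255 range stays reachable.
    static int quantize4(int channel8) {
        int q = channel8 & 0xF0;  // truncate: lower 4 bits are discarded
        return q | (q >>> 4);     // replicate the nibble
    }
}
```

All sixteen inputs 0xA0 through 0xAF collapse to the same output, 0xAA, which is exactly the banding described above.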
Incidentally, 16-bit colour does not normally include an alpha component. Normally (though not always) it breaks down as 5 bits for red, 6 bits for green (since our eyes are most sensitive to green) and 5 bits for blue.
This conversion loses only 2 or 3 bits of information per channel instead of 4, and assumes the source pixel contains no alpha.
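A 5-6-5 packing of a 32-bit AARRGGBB value, discarding alpha, might look like this (a sketch; toRGB565 is a made-up helper name):

```java
public class Rgb565 {
    // Pack a 32-bit AARRGGBB pixel into 16-bit RGB565, discarding alpha.
    static int toRGB565(int aarrggbb) {
        int r = (aarrggbb >>> 16) & 0xFF;  // 8-bit red
        int g = (aarrggbb >>> 8) & 0xFF;   // 8-bit green
        int b = aarrggbb & 0xFF;           // 8-bit blue
        return ((r >>> 3) << 11)  // top 5 bits of red   -> bits 11..15
             | ((g >>> 2) << 5)   // top 6 bits of green -> bits 5..10
             | (b >>> 3);         // top 5 bits of blue  -> bits 0..4
    }
}
```

Pure white (0xFFFFFFFF) packs to 0xFFFF, and each pure primary lands in its own field: red in the top 5 bits, green in the middle 6, blue in the bottom 5.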