All right, so I have been very frustrated trying to convert a 12-bit buffer to an 8-bit one. The image source is 12-bit grayscale (decompressed from JPEG2000), so the pixel values range from 0 to 4095, and I have to reduce that to 0-255. Common sense tells me I should simply divide each pixel value by 16, like this. But when I try it, the image comes out too light.

void
TwelveToEightBit(
    unsigned char * charArray,         // out: 8-bit pixels
    const unsigned char * shortArray,  // in: byte pairs holding 12-bit pixels
    const int num )
{
    unsigned short shortValue = 0; // Will contain the two bytes from shortArray.

    for ( int i = 0, j = 0; i < num; i++, j += 2 )
    {
        // Bitwise manipulations to fit two chars onto one short
        // (most significant byte first).
        shortValue  = (unsigned short)(shortArray[j] << 8);
        shortValue += shortArray[j+1];

        // Divide by 16 to map 0-4095 down to 0-255.
        charArray[i] = (unsigned char)(shortValue / 16);
    }
}

Now I can tell that some contrast adjustment is needed. Any ideas, anyone?

Many thanks in advance.

A: 

Wild guess: your code assumes a big-endian machine (most significant byte first). A Windows PC is little-endian. So perhaps try

  shortValue = (shortArray[j+1]<<8);
  shortValue += (shortArray[j]);

If endianness is indeed the problem, then the code you presented would just shave off the 4 most significant bits of every value and expand the rest to the intensity range. Hm, EDIT, 2 secs later: no, that was a thinko. But try it anyway?
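As a quick sketch of the difference (the byte pair `0x0A 0x7C` is just an illustrative sample, not data from the question):

```c
#include <stdint.h>

/* Assemble a 16-bit sample from two consecutive bytes, both ways.
   Only one of these matches how the decoder wrote the buffer. */
static uint16_t from_big_endian(const unsigned char *p)
{
    return (uint16_t)((p[0] << 8) | p[1]);   /* most significant byte first */
}

static uint16_t from_little_endian(const unsigned char *p)
{
    return (uint16_t)((p[1] << 8) | p[0]);   /* least significant byte first */
}
```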

Cheers & hth.,

– Alf

Alf P. Steinbach
A: 

The main problem, as I understand it, is to convert a 12-bit value to an 8-bit one.

Range of 12-bit value = 0 - 4095 (4096 values)
Range of  8-bit value = 0 -  255 ( 256 values)

I would try to convert a 12-bit value x to a 8-bit value y

  1. First, scale down to the range 0-1, and
  2. Then, scale up to the range 0-255 (multiply by 256 and truncate).

Some C-ish code:

uint16_t x = some_value;
uint8_t  y = (uint8_t) (((double) x / 4096) * 256);

Update

Thanks to kriss's comment, I realized that I had disregarded the speed issue. The above solution, due to the floating-point operations, might be slower than pure integer operations.

Then I started considering another solution. How about constructing y with the 8 most significant bits of x? In other words, by trimming off the 4 least significant bits.

y = x >> 4;

Will this work?
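For what it's worth, for non-negative integers the shift and the scaling are exactly equivalent, since 256/4096 = 1/16; a tiny sketch:

```c
#include <stdint.h>

/* Map a 12-bit sample (0-4095) to 8 bits (0-255) by dropping
   the 4 least significant bits; identical to (x * 256) / 4096
   for integer x in that range. */
static uint8_t twelve_to_eight(uint16_t x)
{
    return (uint8_t)(x >> 4);
}
```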

ArunSaha
I think you are close to or right on the money with this one.
Juan
The original code is already doing something like that, in a much more efficient manner and without losing more precision than necessary by rounding (though maybe with an endianness problem?). Your suggestion has no benefit at all, except making the code 10 times slower or worse by performing a floating point division.
kriss
@Juan: Thanks, but I am not sure if that was an encouragement or sarcasm :-). @kriss: Thanks for pointing out the shortcoming. I posted an update, let me know if you have further comments. @Anonymous downvoter: I am trying to learn by trying to solve. It will be helpful for me if you provide your comments as @kriss did.
ArunSaha
@ArunSaha: No sarcasm, I felt you touched something when you said to scale the value to the 0-1 range. The rest was redundant, but your first statement struck a chord.
Juan
@Juan: Thank you. Does my (updated) answer make sense though?
ArunSaha
@Juan: How did you solve the problem?
ArunSaha
@ArunSaha: every pixel in the image was stored in two bytes, but not on all 16 bits, only on the last 12. For example, `0000 1010 0111 1100` would be a twelve-bit pixel. To fit the most significant byte, I would create a temporary short and store the first byte `0000 1010` in it: `0000 0000 0000 1010`. Then I would shift the short left by 8 bits (`short << 8`): `0000 1010 0000 0000`. Then I would add the second byte: `0000 1010 0000 0000 + 0111 1100 = 0000 1010 0111 1100`. Then I shift that short right by 4 (`short >> 4`), which gives me `1010 0111`.
Juan
However, the actual image itself was lighter than expected; that was simply how the image was, and that is what I eventually realized. I had thought my conversion algorithm was incorrect, but it wasn't.
Juan
@Juan: I went through your example. Thanks. Isn't that same as shifting the original value to the right by 4 bits? Assume `x = 0000 1010 0111 1100`, then `y = (x >> 4 ) = 0000 0000 1010 0111`.
ArunSaha
A: 

In actuality, it was merely some simple contrast adjustment that needed to be made. I realized this as soon as I loaded the result image in Photoshop and applied auto-contrast: the result very closely resembled the expected output image. I found an algorithm that does the contrast adjustment and will post it here for others' convenience:

#include <math.h>

 short  shortValue  = 0; // Will contain the two bytes from shortBuffer.
 double doubleValue = 0; // Will contain intermediary calculations.

 // Contrast adjustment is necessary when converting;
 // setting 50 as the contrast seems to be the real sweet spot.
 double contrast = pow( (100.0 + 50.0) / 100.0, 2 );

 for ( int i = 0, j = 0; i < num; i++, j += 2 )
 {
  // Bitwise manipulations to fit two chars onto one short.
  shortValue  = (shortBuffer[j] << 8);
  shortValue += shortBuffer[j+1];

  doubleValue = (double)shortValue;

  // Divide by 16 to bring 0-4095 down to 0-255 (12 to 8 bits).
  doubleValue /= 16;

  // Flatten it out to a 0-1 range.
  doubleValue /= 255;
  // Center pixel values at 0, so that the range is -0.5 to 0.5.
  doubleValue -= 0.5;
  // Multiply by the contrast ratio; this spreads the values
  // out from the center -- see the histogram for further details.
  doubleValue *= contrast;

  // Change back to a 0-1 range...
  doubleValue += 0.5;
  // ...and back to 0-255.
  doubleValue *= 255;

  // If the pixel values clip a little, clamp them.
  if ( doubleValue > 255 )
   doubleValue = 255;
  else if ( doubleValue < 0 )
   doubleValue = 0;

  // Finally, put back into the char buffer.
  charBuffer[i] = (unsigned char)doubleValue;
 }
Juan
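For reference, the per-pixel steps above collapse algebraically into a single affine map, which can be handy if speed matters; a sketch with the same constants as the answer (the truncating cast matches the original code):

```c
#include <math.h>
#include <stdint.h>

/* The step-by-step adjustment above reduces to:
   out = in * (contrast / 16) + 255 * 0.5 * (1 - contrast), clamped to 0-255,
   where contrast = ((100 + 50) / 100)^2 = 2.25 as in the answer. */
static uint8_t contrast_stretch(uint16_t in)
{
    const double contrast = pow((100.0 + 50.0) / 100.0, 2);  /* 2.25 */
    double v = (double)in * (contrast / 16.0)
             + 255.0 * 0.5 * (1.0 - contrast);
    if (v > 255.0) v = 255.0;
    if (v < 0.0)   v = 0.0;
    return (uint8_t)v;   /* truncate, like the original cast */
}
```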
Nice and simple :-). Gives you a linearly stretched contrast enhancement, whereas the gamma thing is exponential. Linear's clearly easier to calculate quickly. Both have some issues with rescaling to avoid blowing highlights / loss in shadows if your image was already spanning the full dynamic range.
Tony
@Tony: You have no idea how worried I was that I would have to implement something as ghastly as fast fourier transform or some other complex algorithm. This is indeed a linear method, but it sounds like the exponential ones might spread out the concentrated pixel values a little better, but because we are dealing with a tight spectrum here, 0-255, it may not be as noticeable. Thanks a lot for all your help! Really appreciated!
Juan
@Juan: it's a tricky space - to get really good results you often need manual tuning - in Photoshop I tend to use multi-point curves adjustments: they do create a smooth curve through the control points you've added and I'm sure the maths is much more involved. Basically, for everywhere you want to increase contrast, you have to accept that you're reducing contrast in some other part of the intensity levels. Having 12 bit input you'll get less obvious banding, but the differences in output intensities are still just as real. But, if what you've got's good enough for your purposes, great!
Tony