Say I have a float in the range [0, 1] and I want to quantize it and store it in an unsigned byte. Sounds like a no-brainer, but in fact it's quite complicated:
The obvious solution looks like this:
unsigned char QuantizeFloat (float a)
{
    return (unsigned char) (a * 255.0f);
}
This works insofar as I get all numbers from 0 to 255, but the distribution of the integers is not even. The function only returns 255 if a is exactly 1.0f. Not a good solution.
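To make the unevenness concrete, here is a quick check I threw together (a hypothetical test harness, not part of my actual code) that counts how many evenly spaced samples land in the first and last bucket with the truncating version:

#include <stdio.h>

/* Count how many of N+1 evenly spaced samples in [0, 1] map to 0 and to 255
 * under the truncating mapping (unsigned char)(a * 255.0f). */
int main(void)
{
    const int samples = 1000000;
    int low = 0, high = 0;
    for (int i = 0; i <= samples; ++i) {
        float a = (float) i / (float) samples;
        unsigned char q = (unsigned char) (a * 255.0f);
        if (q == 0)   ++low;
        if (q == 255) ++high;
    }
    /* Bucket 0 collects roughly samples / 255 hits; bucket 255 only a == 1.0f. */
    printf("bucket 0: %d, bucket 255: %d\n", low, high);
    return 0;
}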
If I do proper rounding, I just shift the problem:
unsigned char QuantizeFloat (float a)
{
    return (unsigned char) (a * 255.0f + 0.5f);
}
Here the results 0 (and likewise 255) each cover only half the input range that every other value gets.
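For illustration (again just a throwaway sketch, not my real code): under the rounding version the inputs that produce output q are [(q - 0.5)/255, (q + 0.5)/255), clipped to [0, 1]:

#include <stdio.h>

/* Print the input interval that maps to each output value under
 * round-to-nearest, i.e. (unsigned char)(a * 255.0f + 0.5f). */
int main(void)
{
    for (int q = 0; q <= 255; ++q) {
        double lo = (q - 0.5) / 255.0;
        double hi = (q + 0.5) / 255.0;
        if (lo < 0.0) lo = 0.0;   /* bucket 0 is clipped at the low end    */
        if (hi > 1.0) hi = 1.0;   /* bucket 255 is clipped at the high end */
        printf("%3d: [%f, %f)  width %f\n", q, lo, hi, hi - lo);
    }
    return 0;
}

The printout shows a width of 1/255 for every value except 0 and 255, which only get half of that.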
How do I do a quantization with an equal distribution over the floating-point range? Ideally I would like to get an equal distribution of integers if I quantize uniformly distributed random floats.
Any ideas?
Btw: Although my code is in C, the problem is language-agnostic. For the non-C folks: just assume that float-to-int conversion truncates the float.
EDIT: Since we had some confusion here: I need a mapping that maps the smallest input float (0.0) to the smallest unsigned char (0), and the highest float of my range (1.0) to the highest unsigned byte (255).
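For reference, one mapping along these lines that I have seen suggested uses 256 equal-width buckets and clamps the top (just a sketch, I'm not claiming it's the definitive answer):

/* Sketch of one possible approach: 256 equal-width buckets of size 1/256;
 * the clamp only affects the single input a == 1.0f, which still maps to 255. */
unsigned char QuantizeFloat (float a)
{
    int q = (int) (a * 256.0f);   /* truncation gives bucket index 0..256 */
    if (q > 255) q = 255;         /* clamp so that a == 1.0f yields 255   */
    return (unsigned char) q;
}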