Hi everyone,
Does anyone know how to calculate the error introduced by quantizing from 16-bit to 8-bit?
I have looked at the Wikipedia article about Quantization, but it doesn't explain this.
Can anyone explain how it is done?
Lots of love, Louise
Update: My function looks like this:

/* Map d in [0, max] to an 8-bit value. Note the cast truncates rather than rounds. */
unsigned char quantize(double d, double max) {
    return (unsigned char)((d / max) * 255.0);
}
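
Second update: here is a rough sketch of how I imagine measuring the error, in case it helps anyone answer. The idea is to map the 8-bit value back to a double with a dequantize step (my own assumption, just the inverse of my function above) and compare the result against the original input. The sample count, the test range, and the max/RMS metrics are also just my guesses at what "the error" should mean.

#include <stdio.h>
#include <math.h>

unsigned char quantize(double d, double max) {
    return (unsigned char)((d / max) * 255.0);
}

/* Assumed inverse of quantize: map the 8-bit code back into [0, max]. */
double dequantize(unsigned char q, double max) {
    return ((double)q / 255.0) * max;
}

int main(void) {
    double max = 1.0;          /* assumed full-scale value */
    int n = 1000;              /* number of test samples */
    double max_err = 0.0, sum_sq = 0.0;

    /* Round-trip evenly spaced samples and record how far each one moves. */
    for (int i = 0; i < n; i++) {
        double d = max * (double)i / (double)(n - 1);
        double err = fabs(d - dequantize(quantize(d, max), max));
        if (err > max_err) max_err = err;
        sum_sq += err * err;
    }

    printf("max abs error: %g\n", max_err);
    printf("RMS error:     %g\n", sqrt(sum_sq / n));
    return 0;
}

Does that look like a sensible way to measure it, or is there a formula I should be using instead?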