I intend to display (4, 8 or 16 bits per channel, no alpha) images on a 1-bit display in an embedded system. Images are stored as RGB tuples. My intention is to use Floyd-Steinberg dithering, as it looks reasonably good, is more than quick enough, and is concise in code.
With reference to the Wikipedia article, I have two questions.
What would be the best practice for expressing the nearest colour? Would the following work? (Ignore that I'm returning a structure in C.)
typedef struct rgb16_tag { unsigned short r, g, b; } rgb16;

rgb16 nearest_1bit_colour(rgb16 p) {
    double c;
    rgb16 r;

    /* Average the three channels and round to the nearest of the two
       representable levels. */
    c = ((double)(p.r + p.g + p.b + 3 * (1 << 15))) / (3.0 * (1 << 16));
    if (c >= 1.0) {
        r.r = r.g = r.b = 1;
    } else {
        r.r = r.g = r.b = 0;
    }
    return r;
}
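
For what it's worth, on the target I would probably avoid the floating-point divide. The following integer-only sketch should, I believe, behave identically to the version above (the comparison against 3 * (1 << 15) is the same mid-grey threshold, just rearranged), assuming 16-bit channels as in the rgb16 definition:

/* Integer-only sketch of the same test: the pixel goes to white
   when the channel average is at or above mid-grey. */
rgb16 nearest_1bit_colour_int(rgb16 p) {
    rgb16 r;
    unsigned long sum = (unsigned long)p.r + p.g + p.b;
    r.r = r.g = r.b = (sum >= 3UL * (1 << 15)) ? 1 : 0;
    return r;
}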
And is the quantization error expressed on a per-channel basis? That is, does the following make sense?
rgb16 q, new, old, image[X][Y];
int x, y;

... /* (somewhere in the nested loops) */

old = image[x][y];
new = nearest_1bit_colour(old);

/* Repeat the following for each colour channel separately. */
q.{r,g,b} = old.{r,g,b} - new.{r,g,b};
image[x+1][y].{r,g,b}   = image[x+1][y].{r,g,b}   + q.{r,g,b} * 7 / 16;
image[x-1][y+1].{r,g,b} = image[x-1][y+1].{r,g,b} + q.{r,g,b} * 3 / 16;
image[x][y+1].{r,g,b}   = image[x][y+1].{r,g,b}   + q.{r,g,b} * 5 / 16;
image[x+1][y+1].{r,g,b} = image[x+1][y+1].{r,g,b} + q.{r,g,b} * 1 / 16;
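
To make my intent concrete, here is the same step with the {r,g,b} shorthand expanded for the red channel (green and blue would be identical). Two assumptions of mine, beyond what I wrote above: the error is held in a signed int, since old - new can be negative, and the 1-bit result is mapped back to 0 or 0xFFFF before the error is taken, so that the error is in the same units as the pixel data. Edge checks are left to the surrounding loops.

/* One dithering step for a single pixel; red channel shown. */
void dither_step(rgb16 image[X][Y], int x, int y) {
    rgb16 old = image[x][y];
    rgb16 new = nearest_1bit_colour(old);
    int   qr  = (int)old.r - (new.r ? 0xFFFF : 0);  /* signed per-channel error */

    image[x][y] = new;

    image[x+1][y].r   = image[x+1][y].r   + qr * 7 / 16;
    image[x-1][y+1].r = image[x-1][y+1].r + qr * 3 / 16;
    image[x][y+1].r   = image[x][y+1].r   + qr * 5 / 16;
    image[x+1][y+1].r = image[x+1][y+1].r + qr * 1 / 16;
    /* ...and likewise for .g and .b */
}

One thing I am not sure about: with unsigned short channels the accumulated values can drop below 0 or exceed 0xFFFF, so in practice I expect to need a wider signed working buffer or clamping.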