
I'm doing some image processing and I need an automatic white balancing algorithm that's not too CPU intensive. Any recommendations?

edit: and if it's relevant to efficiency, I'll be implementing it in Java with color images as an array of integers.

+5  A: 

A relatively simple algorithm is to average the hues (in HSV or HSL) of the brightest and darkest pixels in the image. In a pinch, go with the brightest pixel only. If the hues of the brightest and darkest pixels are too different, go with the bright pixel. If the dark is near black, go with the bright pixel.

Why even look at the dark pixel? Sometimes the dark is not near black, and hints at the ambient light or fog or haze.

This will make sense to you if you're a heavy Photoshop user. Highlights in a photo are unrelated (or weakly related) to the underlying color of the object. They are your best representation of the color cast of the light, unless the image is so overexposed that everything has overwhelmed the CCDs.

Then adjust the hues of all pixels.

You'll need fast RGB to HSV and HSV to RGB functions. (But maybe you can work in RGB for the pixel corrections with a LUT or linear interpolation.)
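If you're staying in Java, java.awt.Color already ships RGB↔HSB conversions (HSB is the same thing as HSV), so you may not need to write your own. A minimal sketch, assuming pixels are packed as 0xAARRGGBB ints (my assumption about the array layout, not something stated in the question):

```java
import java.awt.Color;

public final class HueShift {
    // Unpack a 0xAARRGGBB pixel, rotate its hue, and repack it, preserving alpha.
    public static int shiftHue(int argb, float hueShift) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        float[] hsb = Color.RGBtoHSB(r, g, b, null); // hsb[0]=hue, [1]=saturation, [2]=brightness
        float h = (hsb[0] + hueShift) % 1.0f;        // hue lives in [0,1), so wrap around
        if (h < 0) h += 1.0f;
        int rgb = Color.HSBtoRGB(h, hsb[1], hsb[2]);
        return (argb & 0xFF000000) | (rgb & 0x00FFFFFF);
    }
}
```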

You don't want to go by average pixel color or most popular color. That way lies madness.

To quickly find the brightest color (and the darkest one), you can work in RGB, but you should have multipliers for green, red, and blue. On an RGB monitor, 255 green is brighter than 255 red which is brighter than 255 blue. I used to have good multipliers in my head, but alas, they have fled my memory. You can probably google for them.
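The multipliers being half-remembered here are most likely the standard Rec. 601 luma weights (0.299 red, 0.587 green, 0.114 blue), which match the green > red > blue ordering described. A sketch using them, again assuming 0xAARRGGBB packed ints:

```java
public final class Extremes {
    // Find the brightest and darkest pixels by Rec. 601 luma
    // (an assumed choice of weights; the answer doesn't specify them).
    public static int[] brightestAndDarkest(int[] pixels) {
        double maxLuma = -1, minLuma = Double.MAX_VALUE;
        int brightest = 0, darkest = 0;
        for (int p : pixels) {
            int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
            double luma = 0.299 * r + 0.587 * g + 0.114 * b; // green weighted heaviest
            if (luma > maxLuma) { maxLuma = luma; brightest = p; }
            if (luma < minLuma) { minLuma = luma; darkest = p; }
        }
        return new int[] { brightest, darkest };
    }
}
```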

This will fail in an image which has no highlights. A matte painted wall, for example. But I don't know what you can do about that.


There are many improvements to make to this simple algorithm. You can average multiple bright pixels, grid the image and grab bright and dark pixels from each cell, etc. You'll find some obvious tweaks after implementing the algorithm.

Nosredna
Thanks! I'll give this a try. I've also found a very simple algorithm that GIMP apparently uses for its automatic white balancing. I'll benchmark the two algorithms and see which one is right for me :)
Charles Ma
+1  A: 

White balancing algorithms are hard. Even digital cameras get it wrong once in a while, even though they know a lot of extra info about the picture - such as whether the flash was used, and the light level.

For starters, I would just average red, green, and blue, and use that as the white balance point. Set limits on it - stay within the ranges for tungsten, fluorescent, and daylight. It won't be perfect, but when it's wrong, it will be relatively easy to explain why.
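This is the classic "gray-world" assumption: scale each channel so the channel averages come out equal. A minimal sketch, leaving out the tungsten/fluorescent/daylight limits suggested above, and assuming 0xAARRGGBB packed ints:

```java
public final class GrayWorld {
    // Gray-world white balance sketch: scale each channel so its average
    // matches the overall gray level. Assumes 0xAARRGGBB packed ints.
    public static void balance(int[] pixels) {
        long sumR = 0, sumG = 0, sumB = 0;
        for (int p : pixels) {
            sumR += (p >> 16) & 0xFF;
            sumG += (p >> 8) & 0xFF;
            sumB += p & 0xFF;
        }
        if (sumR == 0 || sumG == 0 || sumB == 0) return; // degenerate image, avoid division by zero
        // The pixel count cancels out, so channel sums work as well as averages.
        double gray = (sumR + sumG + sumB) / 3.0;
        double kR = gray / sumR, kG = gray / sumG, kB = gray / sumB;
        for (int i = 0; i < pixels.length; i++) {
            int p = pixels[i];
            int r = Math.min(255, (int) Math.round(((p >> 16) & 0xFF) * kR));
            int g = Math.min(255, (int) Math.round(((p >> 8) & 0xFF) * kG));
            int b = Math.min(255, (int) Math.round((p & 0xFF) * kB));
            pixels[i] = (p & 0xFF000000) | (r << 16) | (g << 8) | b;
        }
    }
}
```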

Matthias Wandel
A popular first try, but that'll turn a red ball in the snow to a dullish red ball in a sea of cyan. It's worse than no white balance, IMO.
Nosredna
+1  A: 

GIMP apparently uses a very simple algorithm for automatic white balancing. http://docs.gimp.org/en/gimp-layer-white-balance.html

The White Balance command automatically adjusts the colors of the active layer by stretching the Red, Green and Blue channels separately. To do this, it discards pixel colors at each end of the Red, Green and Blue histograms which are used by only 0.05% of the pixels in the image and stretches the remaining range as much as possible. The result is that pixel colors which occur very infrequently at the outer edges of the histograms (perhaps bits of dust, etc.) do not negatively influence the minimum and maximum values used for stretching the histograms, in comparison with Stretch Contrast. Like “Stretch Contrast”, however, there may be hue shifts in the resulting image.
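For what it's worth, here's a rough sketch of the stretch that quote describes: per channel, find the levels where the bottom and top 0.05% of pixels fall, then stretch the remaining range to 0..255. Packed 0xAARRGGBB ints are an assumption, and GIMP's actual implementation may differ in details:

```java
public final class HistogramStretch {
    // Sketch of the per-channel 0.05% histogram stretch described above.
    public static void balance(int[] pixels) {
        for (int shift = 16; shift >= 0; shift -= 8) { // red, green, blue channels
            int[] hist = new int[256];
            for (int p : pixels) hist[(p >> shift) & 0xFF]++;
            long cutoff = Math.round(pixels.length * 0.0005); // 0.05% of pixels
            int lo = 0, hi = 255;
            long n = 0;
            while (lo < 255 && (n += hist[lo]) <= cutoff) lo++; // discard sparse low end
            n = 0;
            while (hi > 0 && (n += hist[hi]) <= cutoff) hi--;   // discard sparse high end
            if (hi <= lo) continue; // flat channel, nothing to stretch
            double scale = 255.0 / (hi - lo);
            for (int i = 0; i < pixels.length; i++) {
                int v = (pixels[i] >> shift) & 0xFF;
                int s = (int) Math.round(Math.max(0.0, Math.min(255.0, (v - lo) * scale)));
                pixels[i] = (pixels[i] & ~(0xFF << shift)) | (s << shift);
            }
        }
    }
}
```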

There is a bit more tweaking involved than is described here: my first attempt at implementing this seems to work for most photos, but other photos come out with artifacts or contain too much red, green, or blue :/

Charles Ma
Thanks for this link. I had not seen that before.
Nosredna
Maybe you could implement multiple algorithms and then try to figure out which algorithm will work when, based on some characteristics of the images. For exposure (a rather similar problem), modern cameras actually break the image up (plus they know the focus point and, as @Matthias Wandel pointed out, whether flash was used). Manufacturers have analyzed databases of many photos to see what works. The first cameras I remember doing that were the Nikon FA and a couple of others of that era.
Nosredna
By the way, you can find the areas which are in focus by looking at maximum contrast between adjacent pixels. When a photographer uses flash, the in-focus area is usually relatively close to the camera and might have some nice highlights from the flash.
Nosredna
For the purpose of white balance, you probably care about the in-focus areas more than the out-of-focus areas. So you can break the screen into, say, a 5x5 grid, "grade" the cells by focus, and then weight them by importance for your white balance algorithm, as in the sketch below. I could shut up now if you want. Otherwise I'll probably keep babbling forever.
Nosredna
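A hypothetical sketch of that grading idea, with all names mine and chosen for illustration: score each cell of a grid by summed adjacent-pixel contrast as a crude focus measure, then weight the sharper cells more heavily in the white balance pass:

```java
public final class FocusGrid {
    // Grade grid cells by horizontal adjacent-pixel contrast as a crude
    // focus proxy. Higher score = likely sharper cell. Assumes a
    // width*height image of 0xAARRGGBB ints.
    public static double[][] score(int[] pixels, int width, int height, int gridSize) {
        double[][] scores = new double[gridSize][gridSize];
        for (int y = 0; y < height; y++) {
            for (int x = 1; x < width; x++) {
                int g1 = (pixels[y * width + x] >> 8) & 0xFF;     // green as a cheap luminance proxy
                int g0 = (pixels[y * width + x - 1] >> 8) & 0xFF;
                scores[y * gridSize / height][x * gridSize / width] += Math.abs(g1 - g0);
            }
        }
        return scores;
    }
}
```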
Thanks Nosredna, I'm actually using this for video processing (which is why it needs to be efficient), so flash isn't an issue, but I can probably optimize by only recalculating the white balance parameters every 10 frames or so, since lighting doesn't normally change that often. :P
Charles Ma