Hello, I am writing software that thresholds 24-bit color JPEGs.

Currently the system uses a threshold value that I calculated manually, through trial and error, using one sample of each document type.

This method works, but it is error-prone when a document has significant dirt or smudges.

The software uses Intel IPP to decode the image, and a hand-written code block binarizes the grayscale image against that threshold.
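For context, the thresholding step is essentially a fixed-cutoff binarization like the sketch below (simplified; the real code operates on IPP image buffers, and the names here are illustrative, not my actual code):

class FixedThreshold {
    // Binarize an 8-bit grayscale buffer against a fixed threshold.
    // "threshold" is the per-document-type value I currently tune by hand.
    static byte[] binarize(byte[] gray, int threshold) {
        byte[] binary = new byte[gray.length];
        for (int i = 0; i < gray.length; i++) {
            int intensity = gray[i] & 0xFF;                       // treat byte as unsigned 0-255
            binary[i] = (byte) (intensity < threshold ? 0 : 255); // 0 = black, 255 = white
        }
        return binary;
    }
}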

I want to move from a pre-defined threshold per document type to calculating a threshold per image from its intensity histogram. I wrote a sample Java program that builds the histogram and applies the "Minimum Error method", and it gives very good results on all the samples I have thrown at it.
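Building the histogram is the easy part; conceptually it is just a 256-bin count like this (an illustrative sketch, assuming an 8-bit grayscale buffer):

class Histogram {
    // Count how many pixels fall into each of the 256 intensity levels.
    static int[] intensityHistogram(byte[] gray) {
        int[] histogram = new int[256];
        for (byte pixel : gray) {
            histogram[pixel & 0xFF]++;  // mask so the signed byte indexes 0-255
        }
        return histogram;
    }
}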

I know how to calculate the intensity histogram, but I would like to find an explanation, sample source code, or pseudocode for the Minimum Error algorithm as applied to that histogram. Does anyone have any resources? Is the "Minimum Error method" known by another, more popular name that I should search under?

I am surprised I could not find anything in OpenCV for such an algorithm, but again, I may be searching under the wrong nomenclature.

Thank you very much