views:

288

answers:

4

Can anyone list all the different techniques used in face detection? Techniques like neural networks, support vector machines, eigenfaces, etc.

What others are there? Thanks.

A: 

Naive Bayes classifier, which can give you around 90% correctness and it is easy to implement
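
A minimal sketch of that idea, assuming you already have small labelled grayscale patches (face / non-face); the patch size, the variable names and the use of scikit-learn's GaussianNB are illustrative choices, not something the answer specifies:

import numpy as np
from sklearn.naive_bayes import GaussianNB

def train_nb(face_patches, nonface_patches):
    # Flatten each patch into a vector of raw pixel intensities.
    X = np.array([p.ravel() for p in face_patches + nonface_patches], dtype=float)
    y = np.array([1] * len(face_patches) + [0] * len(nonface_patches))
    clf = GaussianNB()
    clf.fit(X, y)
    return clf

def looks_like_face(clf, patch):
    # Predict 1 (face) or 0 (non-face) for a single patch of the same size.
    return clf.predict(patch.ravel().reshape(1, -1))[0] == 1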

sza
A: 

If you need it not just as theory but actually want to do face detection, then I recommend finding already implemented solutions.

There are plenty of tested libraries for different languages, and they are widely used for this purpose. Look at this SO thread for more information: Face recognition library.

Roman
Sorry, yeah, it's purely theoretical.
Harry
+1  A: 

An emerging but rather effective approach to the broad class of vision problems, including face detection, is the use of Hierarchical Temporal Memory (HTM), a concept/technology developed by Numenta.

Very loosely speaking, this is a neural-network-like approach. This type of network has a tree shape where the number of nodes decreases significantly at each level. HTM models some of the structural and algorithmic properties of the neocortex. In [possible] departure from the neocortex, each node uses a Bayesian classification algorithm. The HTM model is based on the memory-prediction theory of brain function and relies heavily on the temporal nature of its inputs; this may explain its ability to deal with vision problems, as these are typically temporal (or can be made so) and also require tolerance for noise and "fuzziness".

While Numenta has produced vision kits and demo applications for some time, Vitamin D recently produced (I think) the first commercial application of HTM technology, at least in the domain of vision applications.

mjv
Oooh, that's the method built upon in the book On Intelligence by Jeff Hawkins, right? Thanks, somehow totally forgot about that, despite reading the book. Oops.
Harry
@harry: yes indeed, HTMs, or rather the characteristics of the cerebral cortex that HTMs are based on and inspired by, were described by Jeff Hawkins in his _On Intelligence_ book.
mjv
+1  A: 

Hi Harry, the technique I'm going to talk about is more of a machine learning oriented approach; in my opinion it is quite fascinating, though not very recent: it was described in the article "Robust Real-Time Face Detection" by Viola and Jones. I used the OpenCV implementation for a university project.
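
As a practical aside before the theory, here is a rough sketch of how that OpenCV implementation can be used, assuming the opencv-python package and the pretrained frontal-face cascade that ships with it; the parameters are just common defaults:

import cv2

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Pretrained Viola-Jones (Haar) cascade bundled with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", img)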

It is based on haar-like features, which consist of additions and subtractions of pixel intensities within rectangular regions of the image. These can be computed very quickly using a structure called the integral image, for which GPGPU implementations also exist (it is sometimes called a "prefix scan"). After computing the integral image in linear time, any haar-like feature can be evaluated in constant time. A feature is basically a function that takes a 24x24 sub-window S of the image and computes a value feature(S); a triplet (feature, threshold, polarity) is called a weak classifier, because

polarity * feature(S) < polarity * threshold

holds true on certain images and false on others; a weak classifier is expected to perform just a little better than a random guess (for instance, it should have an accuracy of at least 51-52%).

Polarity is either -1 or +1.

The feature space is big (~160,000 features), but finite.
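
To make the ideas above concrete, here is a rough numpy sketch of the integral image, of a constant-time rectangle sum, of one possible two-rectangle haar-like feature, and of the weak classifier rule; the names and the particular feature are illustrative, not the exact ones from the paper:

import numpy as np

def integral_image(img):
    # ii[y, x] = sum of all pixels above and to the left of (y, x), inclusive;
    # computed in linear time with two cumulative sums.
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, h, w):
    # Sum of any rectangle in constant time using four lookups.
    s = ii[top + h - 1, left + w - 1]
    if top > 0:
        s -= ii[top - 1, left + w - 1]
    if left > 0:
        s -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        s += ii[top - 1, left - 1]
    return s

def two_rect_feature(ii, top, left, h, w):
    # One example haar-like feature: left half minus right half of a rectangle.
    half = w // 2
    return box_sum(ii, top, left, h, half) - box_sum(ii, top, left + half, h, w - half)

def weak_classify(feature_value, threshold, polarity):
    # The (feature, threshold, polarity) triplet: predict "face" (1) when
    # polarity * feature(S) < polarity * threshold, "non-face" (0) otherwise.
    return 1 if polarity * feature_value < polarity * threshold else 0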

Although the threshold could in principle be any number, from simple considerations on the training set it turns out that if there are N examples, only N + 1 thresholds for each polarity and for each feature have to be examined in order to find the one that yields the best accuracy. The best weak classifier can thus be found by exhaustively searching the triplet space.
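
A sketch of that exhaustive threshold search for a single feature, given its values on the training examples, the labels (1 = face, 0 = non-face) and the current example weights; it is written for clarity, whereas a real implementation sweeps the sorted values with running weight sums instead of this quadratic loop:

import numpy as np

def best_threshold(values, labels, weights):
    order = np.argsort(values)
    v, y, w = values[order], labels[order], weights[order]
    # N + 1 candidate thresholds: below the smallest value, between each
    # consecutive pair of sorted values, and above the largest value.
    candidates = np.concatenate(([v[0] - 1], (v[:-1] + v[1:]) / 2, [v[-1] + 1]))
    best = (None, None, np.inf)   # (threshold, polarity, weighted error)
    for theta in candidates:
        for polarity in (-1, +1):
            pred = (polarity * v < polarity * theta).astype(int)
            err = np.sum(w[pred != y])
            if err < best[2]:
                best = (theta, polarity, err)
    return best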

Basically, a strong classifier can be assembled by iteratively choosing the best possible weak classifier, using an algorithm called "adaptive boosting", or AdaBoost; at each iteration, examples which were misclassified in the previous iteration are weighted more. The strong classifier is characterized by its own global threshold, computed by AdaBoost.
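
A compact sketch of that boosting loop. This is plain discrete AdaBoost; the Viola-Jones paper uses a slightly different weighting scheme, but the idea of re-weighting misclassified examples is the same, and select_best_weak stands in for the exhaustive search over (feature, threshold, polarity) triplets:

import numpy as np

def adaboost(labels, n_rounds, select_best_weak):
    n = len(labels)
    weights = np.full(n, 1.0 / n)
    strong = []                      # list of (alpha, weak_classifier) pairs
    for _ in range(n_rounds):
        # Best weak classifier under the current weights, its predictions on
        # the training set, and its weighted error.
        weak, predictions, error = select_best_weak(weights)
        error = max(error, 1e-10)
        alpha = 0.5 * np.log((1 - error) / error)
        strong.append((alpha, weak))
        # Increase the weight of misclassified examples, decrease the others.
        sign = np.where(predictions == labels, -1.0, 1.0)
        weights *= np.exp(alpha * sign)
        weights /= weights.sum()
    return strong

def strong_classify(strong, window, global_threshold):
    # Weighted vote of the weak classifiers against the global threshold
    # computed by AdaBoost (lowering it trades false positives for recall).
    score = sum(alpha * weak(window) for alpha, weak in strong)
    return score >= global_threshold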

Several strong classifiers are combined as stages in an attentional cascade; the idea behind the attentional cascade is that 24x24 sub-windows that are obviously not faces are discarded in the first stages; a strong classifier usually contains only a few weak classifiers (like 30 or 40), hence it is very fast to compute. Each stage should have a very high recall, while the false positive rate is not very important. If there are 10 stages, each with 0.99 recall and 0.3 false positive rate, the final cascade will have about 0.9 recall (0.99^10 ≈ 0.90) and an extremely low false positive rate (0.3^10 ≈ 6 * 10^-6). For this reason, strong classifiers are usually tuned to increase recall, at the cost of a higher false positive rate. Tuning basically involves reducing the global threshold computed by AdaBoost.

A sub-window that makes its way to the end of the cascade is considered a face.
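
The cascade itself is then just early rejection, for example (building on strong_classify above):

def cascade_classify(stages, window):
    # stages: list of (strong, threshold) pairs, tuned as described above.
    for strong, threshold in stages:
        if not strong_classify(strong, window, threshold):
            return False   # most non-face windows are rejected in early stages
    return True            # survived every stage: the window is reported as a face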

Several sub-windows in the initial image, possibly overlapping and possibly after rescaling the image, must be tested.
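
A sketch of that scanning step, assuming a grayscale numpy image, the integral_image function from the earlier sketch, and a classify_window stand-in for the cascade evaluated at a given position; the step size and scale factor are just typical values:

import cv2

def scan(image, classify_window, window=24, step=4, scale=1.25):
    detections = []
    current = image
    factor = 1.0
    while min(current.shape[:2]) >= window:
        ii = integral_image(current)
        for top in range(0, current.shape[0] - window + 1, step):
            for left in range(0, current.shape[1] - window + 1, step):
                if classify_window(ii, top, left):
                    # Map the 24x24 window back to original-image coordinates.
                    detections.append((int(top * factor), int(left * factor),
                                       int(window * factor)))
        # Downscale the image and repeat, so bigger faces fit in 24x24 windows.
        factor *= scale
        current = cv2.resize(current, (int(current.shape[1] / scale),
                                       int(current.shape[0] / scale)))
    return detections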

Bye, hopefully it was interesting ;-)

Dario

damix911