Hi, I came across an implementation of the nearest neighbor algorithm for finding matches between keypoints in two similar images. The keypoints were generated by the SIFT algorithm. Each point is described by a 128-dimensional vector, and there are many such points in both images.
The matching algorithm uses nearest neighbor search: for each point in one image, it finds the closest point in the other image, where 'closeness' is measured by the minimum Euclidean distance between the points' descriptor vectors. The best matches are then selected by keeping only those pairs whose distance lies below a certain threshold.
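For reference, the Euclidean version I had in mind looks roughly like this (just a minimal NumPy sketch with my own placeholder names `desc_a`, `desc_b`, and `max_dist`, not the actual implementation):

```python
import numpy as np

def match_euclidean(desc_a, desc_b, max_dist):
    """For each descriptor in desc_a, find the nearest descriptor in desc_b
    by Euclidean distance and keep the pair if the distance is small enough."""
    matches = []
    for i, a in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - a, axis=1)  # distances from a to every descriptor in desc_b
        j = int(np.argmin(dists))                   # index of the nearest neighbour
        if dists[j] < max_dist:
            matches.append((i, j, dists[j]))
    return matches
```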
However, the implementation I came across multiplies all of the keypoint descriptor vectors in one image with those in the other image, forming a matrix of products (dot products of the descriptors, as far as I can tell). It then selects the pairs whose product is higher than a given threshold.
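The variant I'm asking about does something like the following instead (again only my sketch of the idea, with hypothetical names `desc_a`, `desc_b`, and `min_score`):

```python
import numpy as np

def match_by_product(desc_a, desc_b, min_score):
    """Multiply every descriptor in desc_a with every descriptor in desc_b
    in one matrix product and keep the pairs whose score exceeds the threshold."""
    scores = desc_a @ desc_b.T                      # (n_a, n_b) matrix of products
    best = np.argmax(scores, axis=1)                # highest-scoring candidate in desc_b for each row
    return [(i, int(j), scores[i, j])
            for i, j in enumerate(best)
            if scores[i, j] > min_score]
```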
This implementation gives correct results, but I'd like to know how it works. Does it use the correlation between the vectors as the metric, or is there something else going on here?