I am amazed at how well (and fast) this software works. I hovered my phone's camera over a small area of a book cover in dim light and it only took a couple of seconds for Google Shopper to identify it. It's almost magical. Does anyone know how it works?

+1  A: 

Pattern Recognition.

Nicholas Zieve
There's a very nice TED Talk on Pattern Recognition (I can't seem to find it) about how, when applied to computers, it can get them to recognize things they have never 'seen' before.
DMin
This is not a good answer.
JimDaniel
A: 

I have no idea how Google Shopper actually works. But it could work like this:

  • Take your image and convert to edges (using an edge filter, preserving color information).
  • Find points where edges intersect and make a list of them (including colors and perhaps angles of intersecting edges).
  • Convert to a rotation-independent metric by selecting pairs of high-contrast points and measuring the distance between them. Now the book cover is represented as a bunch of numbers: (edgecolor1a, edgecolor1b, edgecolor2a, edgecolor2b, distance).
  • Pick pairs of the most notable distance values and take the ratios of those distances, which makes the representation scale-invariant.
  • Send this data as a query string to Google, where it finds the most similar vector (possibly with a direct nearest-neighbor computation, or perhaps with an appropriately trained classifier--probably a support vector machine). A rough code sketch of these steps follows this list.
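Purely to make those steps concrete, here's a rough Python sketch using OpenCV and NumPy (my assumption; I have no idea what Google actually uses). Corner detection stands in for "find points where edges intersect", and the thresholds and point counts are arbitrary:

    import itertools

    import cv2
    import numpy as np

    def extract_features(image_path, max_points=20):
        """Find high-contrast points, record the colors at each point, and
        build scale/rotation-tolerant distance ratios (all values illustrative)."""
        img = cv2.imread(image_path)                      # BGR color image
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Corner detection stands in for "find points where edges intersect".
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_points,
                                          qualityLevel=0.05, minDistance=10)
        if corners is None:
            return [], []
        points = [tuple(map(int, c.ravel())) for c in corners]

        # Rotation-independent metric: pairwise distances between points,
        # each tagged with the colors at its two endpoints.
        features = []
        for (x1, y1), (x2, y2) in itertools.combinations(points, 2):
            dist = float(np.hypot(x2 - x1, y2 - y1))
            features.append((img[y1, x1].tolist(), img[y2, x2].tolist(), dist))

        # Ratios of the largest distances give a scale-invariant signature.
        top = sorted(f[2] for f in features)[-5:]
        ratios = [d / top[-1] for d in top[:-1]]
        return features, ratios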

Google Shopper could also send the entire picture, at which point Google's servers could use considerably more powerful processors to do the image processing, which means it could use more sophisticated techniques (I've chosen the steps above to be simple enough to run on a smartphone).

Anyway, the general steps are very likely to be (1) extract scale and rotation-invariant features, (2) match that feature vector to a library of pre-computed features.
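To make step (2) concrete, here is a toy nearest-neighbor match against a tiny made-up library of pre-computed feature vectors (the keys and numbers are invented for illustration; a real service would use an approximate index over millions of products rather than a brute-force scan):

    import numpy as np

    # Hypothetical library of pre-computed feature vectors, one per product.
    library = {
        "book-1": np.array([0.12, 0.80, 0.33, 0.55]),
        "book-2": np.array([0.90, 0.10, 0.44, 0.27]),
        "book-3": np.array([0.15, 0.78, 0.30, 0.60]),
    }

    def best_match(query):
        # Brute-force nearest neighbor: smallest Euclidean distance wins.
        return min(library, key=lambda name: np.linalg.norm(library[name] - query))

    print(best_match(np.array([0.14, 0.79, 0.31, 0.58])))  # -> book-3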

Rex Kerr