I am trying to create a system which would be able to find users with similar favourite movies/books/interests/etc., much like neighbours on last.fm. Users sharing the most mutual interests would have the highest match and would be displayed in user profiles (5 best matches or so).

Is there any reasonably fast way to do this? The obvious solution would be to create a table with user ids and interest ids and compare each user with all the others, but that would take forever on a table with, say, a million users each having 20 interests.

I assume some efficient solution exists, since last.fm works quite well. I would prefer a common SQL database like MySQL or PostgreSQL, but anything would do.

Thanks for your suggestions.


UPDATE:
As it turns out, the biggest problem is finding nearest neighbours in SQL databases, as none of the open-source ones supports this kind of search.
So my solution would be to modify ANN to run as a service and query it from PHP (over sockets, for instance) - keeping even millions of users with, say, 7 dimensions in memory is not much of a problem, and it runs unbelievably fast.
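
To illustrate the idea, a rough sketch of such a service in Python (the JSON-over-TCP protocol here is made up, and a brute-force scan stands in for the actual ANN search):

  import json
  import socketserver

  # Hypothetical in-memory store: user_id -> 7-dimensional feature vector
  USER_VECTORS = {}

  def nearest(query, k=5):
    # Brute-force squared Euclidean distance; a real service would
    # delegate this lookup to the ANN library instead
    def dist(v):
      return sum((a - b) ** 2 for a, b in zip(query, v))
    ranked = sorted(USER_VECTORS, key=lambda uid: dist(USER_VECTORS[uid]))
    return ranked[:k]

  class NeighbourHandler(socketserver.StreamRequestHandler):
    def handle(self):
      # One JSON request per connection: {"vector": [...], "k": 5}
      request = json.loads(self.rfile.readline())
      reply = nearest(request["vector"], request.get("k", 5))
      self.wfile.write((json.dumps(reply) + "\n").encode())

  if __name__ == "__main__":
    socketserver.TCPServer(("localhost", 9999), NeighbourHandler).serve_forever()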

Another solution for smaller datasets is this simple query:

-- All users sharing an interest with user 5, ranked by the number of
-- mutual interests (assumes an index covering interest_id)
SELECT b.user_id, COUNT(1) AS mutual_interests
FROM `users_interests` a
JOIN `users_interests` b ON (a.interest_id = b.interest_id)
WHERE a.user_id = 5 AND b.user_id != 5
GROUP BY b.user_id
ORDER BY mutual_interests DESC, b.user_id ASC

This runs in 20-50 ms with 100K users, each having ~20 interests (out of 10,000 possible interests) on average.

A: 

You want to solve the approximate nearest neighbor problem. Encode each user's characteristics as a vector in some space, and then find the approximately nearest other user in that space.

Exactly what space, and what distance metric, you want to use are probably things to evaluate experimentally based on your data. Fortunately, there is a C++ package you can use to solve this problem with various metrics and algorithms to fit your needs: http://www.cs.umd.edu/~mount/ANN/

Edit: It's true that the running time here depends on the number of features. But there is a handy theorem in high-dimensional geometry which says that if you have n points in arbitrarily high dimensions, and you only care about approximate distances, you can project them down into O(log n) dimensions without loss. See the Johnson-Lindenstrauss lemma (http://en.wikipedia.org/wiki/Johnson-Lindenstrauss_lemma). (A random projection is performed by multiplying your points by a random +1/-1 valued matrix.) Note that log(1,000,000) = 6 in base 10, for example.
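
To make that concrete, a random projection can be sketched like this (dimensions are illustrative; distances are preserved only up to a constant scale factor, which you can normalize away by dividing by sqrt(reduced_dim)):

  import random

  def random_projection_matrix(original_dim, reduced_dim, seed=0):
    # Each entry is +1 or -1 with equal probability
    rng = random.Random(seed)
    return [[rng.choice((1, -1)) for _ in range(original_dim)]
            for _ in range(reduced_dim)]

  def project(vector, matrix):
    # Multiply the (reduced_dim x original_dim) matrix by the vector
    return [sum(row[i] * vector[i] for i in range(len(vector)))
            for row in matrix]

  # Example: 10,000 possible interests squashed into 20 dimensions
  M = random_projection_matrix(10000, 20)
  user = [0] * 10000
  user[42] = user[1337] = user[9000] = 1  # this user has three interests
  print(project(user, M))                 # a 20-dimensional point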

Aaron
Thanks, encoding characteristics as a vector seems like a good idea. However, this ANN library (and probably any C++ approach) would require holding the entire users/interests table in memory, which would be a little too expensive. Plus, the authors claim that it performs well only with "thousands to hundreds of thousands, and in dimensions as high as 20", but it's likely that there would be tens of thousands of dimensions (just imagine how many films exist).
81403
Actually, you can project down to a much smaller number of dimensions to solve this problem. Let me update my answer to point you to the relevant theorem.
Aaron
Ah, now that explains the mystery :) One more question - adding new interests/dimensions would also require rebuilding the reduced dimensions, right? (at least from time to time)
81403
Yes, you'd have to update the projection, and slowly increase the dimensionality as you added features.
Aaron
A: 

I would recommend the book Programming Collective Intelligence, which is actually dedicated to problems like yours (clustering is discussed in Chapter 3).

Briefly, you should calculate the Tanimoto coefficient for each pair of users: the ratio of the intersection set (only the interests that are in both sets) to the union set (all the interests in either set). For two vectors it is defined like this:

  def tanimoto(v1, v2):
    # v1 and v2 are interest vectors: v[i] != 0 means the user has interest i
    c1, c2, shr = 0, 0, 0
    for i in range(len(v1)):
      if v1[i] != 0: c1 += 1                  # in v1
      if v2[i] != 0: c2 += 1                  # in v2
      if v1[i] != 0 and v2[i] != 0: shr += 1  # in both
    return 1.0 - (float(shr) / (c1 + c2 - shr))

This will return a value between 1.0 and 0.0. A value of 1.0 means the two users share no interests at all, and 0.0 means they have exactly the same set of interests.
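
For example, two users with three interests each, two of them shared, come out at distance 0.5:

  >>> tanimoto([1, 0, 1, 1], [0, 1, 1, 1])
  0.5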

UPDATE (answer to the comment): Yes, it is assumed that you will store every user's neighbors. Also, you should take into account that when a user changes his interests, the distances from other users to him also need to be refreshed, so I suggest making this calculation on a schedule rather than on every change event, and making it scalable. Furthermore, to speed it up you can represent interests as bit sets and use fast bit-counting algorithms, see http://stackoverflow.com/questions/109023/best-algorithm-to-count-the-number-of-set-bits-in-a-32-bit-integer
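
For example, if each user's interests are packed into an integer bitmask, the whole coefficient reduces to two bitwise operations plus a popcount (a sketch; Python's bin(...).count("1") stands in for the faster bit-counting tricks from that link):

  def tanimoto_bits(bits1, bits2):
    # Bit i is set if the user has interest i
    shared = bin(bits1 & bits2).count("1")  # interests in both
    total = bin(bits1 | bits2).count("1")   # interests in either
    return 1.0 - float(shared) / total

  alice = 0b101101  # interests 0, 2, 3, 5
  bob = 0b100111    # interests 0, 1, 2, 5
  print(tanimoto_bits(alice, bob))  # 0.4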

Vitalii Fedorenko
Thanks, I'll check out the book. So, if I understand this correctly, I would have to build (and maintain) some sort of cache of similarities between users, right? That would mean that if a user changed his interests, I would have to crawl through all the users and recalculate their similarities, which takes ~2 seconds on my testing database with 10,000 users and 25 interests per user on average.
81403
Well she/he doesn't *need* to. That's an option :-)
Mau
@Mau right, fixed
Vitalii Fedorenko
81403
@81403 Actually, you are not forced to run it in SQL; it can be implemented in Java or any other language used by your application
Vitalii Fedorenko