I currently have Python code that compares two texts using the cosine similarity measure. I got the code here.

What I want to do is take the two texts and pass them through a dictionary (not a Python dictionary, just a dictionary of words) first, before calculating the similarity measure. The dictionary will just be a list of words, although it will be a large list. I know it shouldn't be hard and I could probably stumble my way through something, but I would like it to be efficient too. Thanks.
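A minimal sketch of the intended flow, assuming the similarity is computed over raw term counts (the word list below is a tiny stand-in for the real dictionary):

import math
from collections import Counter

def cosine_similarity(words1, words2):
    # Cosine similarity over raw term counts.
    c1, c2 = Counter(words1), Counter(words2)
    dot = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    norm1 = math.sqrt(sum(v * v for v in c1.values()))
    norm2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

dictionary = {"cat", "dog", "sat", "the", "mat"}  # stand-in for the large word list
words1 = [w for w in "the cat sat on the mat".split() if w in dictionary]
words2 = [w for w in "the dog sat".split() if w in dictionary]
print(cosine_similarity(words1, words2))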

+1  A: 

If the dictionary fits in memory, use a Python set:

ok_words = set(["a", "b", "c", "e"])

def filter_words(words):
    return [word for word in words if word in ok_words]

If it doesn't fit in memory, you can use shelve.
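A minimal sketch of the shelve approach, assuming the word list is written once to a file named ok_words.db:

import shelve

# Build the persistent word store once (in practice, read the words from your word-list file).
with shelve.open("ok_words.db") as db:
    for word in ["a", "b", "c", "e"]:
        db[word] = True

# Filter later without loading the whole list into memory.
def filter_words(words):
    with shelve.open("ok_words.db", flag="r") as db:
        return [word for word in words if word in db]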

lazy1
I believe the last line should be `return [word for word in words **if** word in ok_words]`?
Andrei
I updated the typo.
Nick Presta
Sets are not the same thing as dictionaries (though the implementation is similar).
intuited
Thank you. This is the "pythonic" way I was looking for. I'll have to look more into shelve though.
KyleP
Andrei: You're right, my bad (note to self: *run* the code before posting)
lazy1
A: 

The structure you are trying to create is known as an inverted index. Here you can find some general information about it and snippets from Heaps and Mills's implementation. Unfortunately, I wasn't able to find its source, or any other efficient implementation. (Please leave a comment if you find one.)
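As a toy illustration of the idea (not Heaps and Mills's implementation), an inverted index in plain Python can be as simple as:

from collections import defaultdict

def build_inverted_index(documents):
    # Map each word to the set of document ids containing it.
    index = defaultdict(set)
    for doc_id, text in enumerate(documents):
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

docs = ["the cat sat", "the dog sat", "a cat and a dog"]
index = build_inverted_index(docs)
print(index["cat"])  # {0, 2}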

If your goal is not to create a library in pure Python, you can use PyLucene, a Python extension for accessing Lucene, which in turn is a very powerful search engine written in Java. Lucene implements an inverted index and can easily provide you with information on word frequency. It also supports a wide range of analyzers (parsers + stemmers) for a dozen languages.
(Also note that Lucene already has its own Similarity measure class.)

A few words about similarity and Vector Space Models. The VSM is a very powerful abstraction, but your implementation suffers from several disadvantages. As the number of documents in your index grows, the co-occurrence matrix will become too big to fit in memory, and searching it will take a long time. To counter this effect, dimension reduction is used. In methods like LSA this is done by Singular Value Decomposition. Also pay attention to techniques such as PLSA, which uses probability theory, and Random Indexing, which is the only incremental (and therefore the only one appropriate for large indexes) VSM method.
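As a toy illustration of the dimension-reduction idea behind LSA (the term-document matrix below is made up), truncated SVD with NumPy looks roughly like this:

import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
td = np.array([
    [2.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 0.0],
    [0.0, 2.0, 1.0],
])

k = 2                                      # number of latent dimensions to keep
u, s, vt = np.linalg.svd(td, full_matrices=False)
doc_vectors = (np.diag(s[:k]) @ vt[:k]).T  # documents in the reduced space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(doc_vectors[0], doc_vectors[1]))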

Andrei
Since this topic is not about VSM, I won't give more information about it here, but if you need it, please create a new question and post a comment with a link to it here.
Andrei
Thank you for the links.
KyleP