Hi all, I have been working on a project about sentence similarity. I know this has been asked many times on SO, but I just want to know whether my problem can be solved with the method I am using, the way I am using it, or whether I should change my approach. Roughly speaking, the system is supposed to split an article into sentences and find similar sentences among the other articles that are fed to the system.

I am using cosine similarity with tf-idf weights, and this is how I did it:

1- First, I split all the articles into sentences, then I generate trigrams for each sentence and sort them (should I?). A simplified sketch of this step follows the list.

2- I compute the tf-idf weights of trigrams and create vectors for all sentences.

3- I calculate the dot product and the magnitudes of the original sentence vector and the compared sentence vector, then compute the cosine similarity.
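For concreteness, this is roughly what step 1 looks like (a simplified sketch; I split on periods and use word trigrams here just for illustration):

    def split_sentences(article):
        # Illustration only; real sentence splitting is messier than this.
        return [s.strip() for s in article.split(".") if s.strip()]

    def trigrams(sentence):
        # Word trigrams of a sentence; this is the list I then sort (or not?).
        words = sentence.lower().split()
        return [tuple(words[i:i+3]) for i in range(len(words) - 2)]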

However, the system does not work as I expected, and I have some questions.

From what I have read about tf-idf weights, I gather they are more useful for finding similar "documents". Since I am working on sentences, I modified the algorithm a little by changing some variables in the tf and idf definitions (instead of documents, I came up with a sentence-based definition):

tf = (number of occurrences of the trigram in the sentence) / (number of all trigrams in the sentence)

idf = (number of all sentences in all articles) / (number of sentences in which the trigram appears)
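In code, my two definitions come out like this (my own transcription; I know the textbook idf usually takes a log of that ratio):

    from collections import Counter

    def tf(trigram, sent_tris):
        # sent_tris: the list of trigrams of one sentence
        return Counter(sent_tris)[trigram] / len(sent_tris)

    def idf(trigram, corpus_tris):
        # corpus_tris: one trigram list per sentence across all articles;
        # assumes the trigram occurs in at least one sentence.
        containing = sum(1 for tris in corpus_tris if trigram in tris)
        return len(corpus_tris) / containing  # textbook idf would take log() here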

Do you think it is ok to use such a definition for this problem?

Another thing: I have seen normalization mentioned many times in the context of cosine similarity. I am guessing this is important because the trigram vectors might not be the same size (and in my case they rarely are). If one trigram vector has size x and the other has size x+1, I treat the first vector as if it had size x+1 with the last value being 0. Is this what is meant by normalization? If not, how do I normalize?

Besides these, if I have chosen the wrong algorithm, what else can be used for such a problem (preferably with an n-gram approach)?

Thank you in advance.

+1  A: 

I am not sure why you are sorting the trigrams of every sentence. All you need to care about when computing the cosine similarity is whether the same trigram occurred in the two sentences, and with what frequencies. Conceptually speaking, you define a fixed, common order over all possible trigrams; remember, the order has to be the same for all sentences. If the number of possible trigrams is N, then for each sentence you obtain a vector of dimensionality N. If a certain trigram does not occur, you set the corresponding value in the vector to zero. You don't really need to store the zeros, but you do have to account for them when you define the dot product.
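A minimal sketch of that dot product, assuming you keep each sentence's nonzero tf-idf weights in a dict keyed by trigram (the dict representation is just my assumption):

    def dot(u, v):
        # u, v: {trigram: tf-idf weight}. Iterating over the smaller dict and
        # defaulting to 0.0 handles the implicit zeros of the N-dim vectors.
        if len(u) > len(v):
            u, v = v, u
        return sum(w * v.get(t, 0.0) for t, w in u.items())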

Having said that, trigrams are not a good choice, as the chances of a match are a lot sparser. For large k you will get better results from bags of k consecutive words than from k-grams. Note that the ordering does not matter inside a bag; it is a set. You are using k=3, which seems to be on the high side, especially for sentences. Either drop down to bigrams or use bags of different lengths, starting from 1; preferably use both.
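To make the difference concrete, here is a sketch: the same sliding windows, but each window becomes an order-free set instead of a tuple:

    def kgrams(words, k):
        return [tuple(words[i:i+k]) for i in range(len(words) - k + 1)]

    def kbags(words, k):
        # Same windows as kgrams, but order inside each window is ignored.
        return [frozenset(words[i:i+k]) for i in range(len(words) - k + 1)]

    words = "the quick brown fox".split()
    print(kgrams(words, 2))  # [('the', 'quick'), ('quick', 'brown'), ('brown', 'fox')]
    print(kbags(words, 2))   # matches even if two adjacent words were swapped in the text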

I am sure you have noticed that sentences that do not share an exact trigram have 0 similarity in your method. Bags of k words will alleviate the situation somewhat but not solve it completely, because you still need the sentences to share actual words, and two sentences may be similar without using the same words. There are a couple of ways to fix this: either use LSI (Latent Semantic Indexing), or cluster the words and use the cluster labels to define your cosine similarity.
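For the LSI route, one way to try it (my choice of tooling, scikit-learn, which the answer does not prescribe) is a truncated SVD of the tf-idf matrix:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    sents = [
        "the cat sat on the mat",
        "a feline rested on a rug",
        "stock prices fell sharply today",
        "the market dropped at the open",
    ]
    X = TfidfVectorizer().fit_transform(sents)         # sentence-by-term tf-idf matrix
    Z = TruncatedSVD(n_components=2).fit_transform(X)  # project into a 2-d latent space
    print(cosine_similarity(Z))                        # latent-space sentence similarities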

To compute the cosine similarity between vectors x and y, you compute the dot product and divide by the norms of x and y. The 2-norm of a vector x is the square root of the sum of its squared components. However, you should also try your algorithm without any normalization, for comparison; it usually works fine, because you are already accounting for the relative sizes of the sentences when you compute the term frequencies (tf).
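In code, over the same sparse-dict vectors as in the earlier sketch (repeated here so it is self-contained):

    import math

    def dot(u, v):
        return sum(w * v.get(t, 0.0) for t, w in u.items())

    def norm(u):
        # 2-norm: square root of the sum of squared components.
        return math.sqrt(sum(w * w for w in u.values()))

    def cosine(u, v):
        nu, nv = norm(u), norm(v)
        return dot(u, v) / (nu * nv) if nu and nv else 0.0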

Hope this helps.

srean
@Ahmet If there is anything that you want me to clarify, let me know.
srean
Thank you for your answer. First, the reason I sort the vector is that I get better results. I tried what you suggest, but no luck. But I just realized that the sentences found to be similar are generally the ones of similar length. This cosine similarity looks a little random to me, since we are not checking whether the n-grams match; instead we are checking the frequencies of n-grams without considering what they are. Maybe I am still missing something.
Ahmet Keskin
Of course the cosine similarity will look random if you do not check whether the trigrams match: what you are computing in that case has nothing to do with cosine similarity. You are doing it wrong, and in this case it _will_ be random by definition. Give it another try and follow the instructions closely; it will work.
srean
Hi, thanks for the advice. It is working now. That was an important detail I should have taken care of. The way I implemented the algorithm was indeed wrong, and that was the answer I was looking for. The algorithm I read somewhere (I don't really remember where exactly) did not mention the matching part, so I was kind of expecting something magical from the sorting step. I had even considered switching to the Dice coefficient, because it made much more sense, but now cosine similarity works fine. Thank you again.
Ahmet Keskin
Happy to have helped
srean