views:

1003

answers:

3

Hi all, I am looking for a simple Java class that can compute a tf-idf calculation. I want to do a similarity test on 2 documents. I found so many BIG APIs that include a tf-idf class, but I do not want to use a big jar file just to do my simple test. Please help! Or at least, if someone can tell me how to find TF and IDF, I will calculate the results :) Or, if you can point me to a good Java tutorial for this. Please do not tell me to go look on Google, I already did for 3 days and couldn't find anything :( Please also do not refer me to Lucene :(

+3  A: 

Term Frequency is the square root of the number of times a term occurs in a particular document.

Inverse Document Frequency is the log of (the total number of documents divided by one plus the number of documents containing the term), plus one. The extra one in the denominator is there in case the term occurs in zero documents -- obviously you don't want to divide by zero.

If it isn't clear from that answer, there is a TF per term per document, and an IDF per term.

And then TF-IDF(term, document) = TF(term, document) * IDF(term)

Finally, you use the vector space model to compare documents, where each term is a new dimension and the "length" of the part of the vector pointing in that dimension is the TF-IDF calculation. Each document is a vector, so compute the two vectors and then compute the distance between them.
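As a rough illustration (not part of the original answer - the method and variable names here are made up), computing that per-document vector in Java might look something like this, using the TF and IDF definitions above:

    import java.util.HashMap;
    import java.util.Map;

    // termCounts: how many times each term occurs in this document
    // docFreq:    how many documents in the collection contain each term
    // numDocs:    total number of documents in the collection
    static Map<String, Double> tfIdfVector(Map<String, Integer> termCounts,
                                           Map<String, Integer> docFreq,
                                           int numDocs) {
        Map<String, Double> vector = new HashMap<String, Double>();
        for (Map.Entry<String, Integer> entry : termCounts.entrySet()) {
            double tf = Math.sqrt(entry.getValue());                    // TF = sqrt(count)
            double idf = Math.log((double) numDocs
                    / (docFreq.get(entry.getKey()) + 1)) + 1.0;         // IDF = log(numDocs/(docFreq+1)) + 1
            vector.put(entry.getKey(), tf * idf);
        }
        return vector;
    }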

So to do this in Java, read the file in one line at a time with a FileReader or something, and split on spaces or whatever other delimiters you want to use - each word is a term. Count the number of times each term appears in each file, and the number of files each term appears in. Then you have everything you need to do the above calculations.

And since I have nothing else to do, I looked up the vector distance formula. Here you go:

D=sqrt((x2-x1)^2+(y2-y1)^2+...+(n2-n1)^2)

For this purpose, x1 is the TF-IDF for term x in document 1.
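A rough Java sketch of that distance calculation (again, just an illustration - it assumes each document's TF-IDF vector is a Map from term to value, with a missing term counting as zero):

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    static double distance(Map<String, Double> doc1, Map<String, Double> doc2) {
        Set<String> allTerms = new HashSet<String>(doc1.keySet());
        allTerms.addAll(doc2.keySet());                  // every dimension used by either document
        double sum = 0.0;
        for (String term : allTerms) {
            double x1 = doc1.containsKey(term) ? doc1.get(term) : 0.0;
            double x2 = doc2.containsKey(term) ? doc2.get(term) : 0.0;
            sum += (x2 - x1) * (x2 - x1);
        }
        return Math.sqrt(sum);                           // D = sqrt((x2-x1)^2 + ... + (n2-n1)^2)
    }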

Edit: in response to your question about how to count the words in a document:

  1. Read the file in line by line with a reader, like new BufferedReader(new FileReader(filename)) - you can call BufferedReader.readLine() in a while loop, checking for null each time.
  2. For each line, call line.split("\\s+") - that will split your line on whitespace (runs of spaces included) and give you an array of all of the words.
  3. For each word, add 1 to the word's count for the current document. This could be done using a HashMap - see the sketch after this list.

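Putting those three steps together, a minimal sketch could look like this (error handling kept deliberately simple, and the method name is just for the example):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    static Map<String, Integer> countWords(String filename) throws IOException {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        BufferedReader reader = new BufferedReader(new FileReader(filename));
        String line;
        while ((line = reader.readLine()) != null) {      // 1. read line by line, checking for null
            for (String word : line.split("\\s+")) {       // 2. split on whitespace
                if (word.isEmpty()) {
                    continue;
                }
                Integer old = counts.get(word);             // 3. add 1 to this word's count
                counts.put(word, old == null ? 1 : old + 1);
            }
        }
        reader.close();
        return counts;
    }
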
Now, compute D for every pair of documents. With X documents that is on the order of X^2 comparisons - this shouldn't take particularly long for 10,000. Remember that two documents are MORE similar when the D between them is smaller. So you could compute D for every pair of documents and store the pairs in a priority queue or some other sorted structure such that the most similar documents bubble up to the top. Make sense?
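
One rough way to sketch that comparison (all names illustrative; it reuses the distance method above and a plain sorted list rather than a priority queue):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;
    import java.util.Map;

    // vectors: one TF-IDF vector per document, built as described above
    static void printMostSimilarPairs(List<Map<String, Double>> vectors) {
        List<double[]> pairs = new ArrayList<double[]>();  // each entry is {distance, i, j}
        for (int i = 0; i < vectors.size(); i++) {
            for (int j = i + 1; j < vectors.size(); j++) {
                double d = distance(vectors.get(i), vectors.get(j));
                pairs.add(new double[] { d, i, j });
            }
        }
        Collections.sort(pairs, new Comparator<double[]>() {   // smallest distance = most similar first
            public int compare(double[] a, double[] b) {
                return Double.compare(a[0], b[0]);
            }
        });
        for (double[] p : pairs.subList(0, Math.min(10, pairs.size()))) {
            System.out.println("docs " + (int) p[1] + " and " + (int) p[2] + ", distance " + p[0]);
        }
    }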

danben
So...let me know if you have any questions or anything.
danben
I told you how to implement this in Java. The only part I didn't say explicitly was that files A and B are more similar than files C and D if the distance between A and B (marked as "D" in my answer) is less than the distance between C and D. Did you read the last part, starting with "So to do this in Java..."? If you have specific questions about it I am more than happy to answer but I feel like you didn't really read it.
danben
Hi danben, thanks for the quick reply. I read your answer and understand it all, but I am unable to convey my questions properly. Let me make it shorter. 1 - How can I count each word, i.e. how many times it appears in all documents? 2 - When I have the TF/IDF matrix, how will I know which 2 documents are more similar than others (because I will have something like 10,000 X,Y values and manual verification is impossible)? Should I use clustering? If yes, then which one?
Ok, I have responded to these questions in the body of the answer - see above.
danben
Thank you so much danben, you are so helpful. Your answer is the best! Just one last question: if tf/idf can tell similarity, then why do most people prefer to also use clustering at the end to see the similarity?
A: 

While you specifically asked not to be referred to Lucene, please allow me to point you to the exact class. The class you are looking for is DefaultSimilarity. It has an extremely simple API to calculate TF and IDF. See the Java code here. Or you could just implement it yourself, as specified in the DefaultSimilarity documentation.

          TF = sqrt(freq)

and

          IDF = log(numDocs/(docFreq+1)) + 1.

The log and sqrt functions are used to damp the actual values. Using the raw values can skew results dramatically.
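
If you implement it yourself, the two methods are tiny. Roughly (this is a paraphrase of the formulas above, not Lucene's actual source):

    // TF = sqrt(freq), where freq is the term's count in the document
    static double tf(int freq) {
        return Math.sqrt(freq);
    }

    // IDF = log(numDocs / (docFreq + 1)) + 1
    static double idf(int docFreq, int numDocs) {
        return Math.log((double) numDocs / (docFreq + 1)) + 1.0;
    }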

Shashikant Kore
Hi Shashikant, thanks for your reply. Well, in my question I said I do not want to use Lucene. Thanks for telling me the exact class, but I am 100% sure I cannot use that class without Lucene :). Any other tip? Also please read my comments above!
I have given a pointer to the Java code. You can simply copy-paste the tf() and idf() methods into your own class. They don't have any other dependencies.
Shashikant Kore
Hi Shashikant, thanks. I wrote my question in more detail, I hope you can answer me. Thanks for pointing out the two methods, but I already know these formula-type methods. So here are 2 simple questions: 1 - How can I count each word, i.e. how many times it appears in all documents? 2 - When I have the TF/IDF matrix, how will I know which 2 documents are more similar than others (because I will have something like 10,000 X,Y values and manual verification is impossible)? Should I use clustering? If yes, then which one?
A: 

agazerboy, Sujit Pal's blog post gives a thorough description of calculating TF and IDF. Regarding verifying results, I suggest you start with a small corpus (say 100 documents) so that you can easily see whether you are correct. For 10,000 documents, using Lucene begins to look like a really rational choice.

Yuval F