views: 93
answers: 4

I have a text file containing posts in English and Italian. I would like to read the posts into a data matrix so that each row represents a post and each column a word. The cells in the matrix are the counts of how many times each word appears in the post. The dictionary should consist of either all the words in the whole file or a non-exhaustive English/Italian dictionary.

I know this is a common, essential preprocessing step for NLP, and I know it's pretty trivial to code. Still, I'd like to use some NLP domain-specific tool so I get stop words trimmed, etc.

Does anyone know of a tool/project that can perform this task?

Someone mentioned Apache Lucene; do you know whether a Lucene index can be serialized to a data structure similar to what I need?

+2  A: 

Maybe you want to look at GATE. It is an infrastructure for text mining and processing. This is what GATE does (taken from the site):

  • open source software capable of solving almost any text processing problem
  • a mature and extensive community of developers, users, educators, students and scientists
  • a defined and repeatable process for creating robust and maintainable text processing workflows
  • in active use for all sorts of language processing tasks and applications, including: voice of the customer; cancer research; drug research; decision support; recruitment; web mining; information extraction; semantic annotation
  • the result of a €multi-million R&D programme running since 1995, funded by commercial users, the EC, BBSRC, EPSRC, AHRC, JISC, etc.
  • used by corporations, SMEs, research labs and Universities worldwide
  • the Eclipse of Natural Language Engineering, the Lucene of Information Extraction, the ISO 9001 of Text Mining
Vivin Paliath
+2  A: 

What you want is so simple that, in most languages, I would suggest you roll your own solution using an array of hash tables that map from strings to integers. For example, in C#:

var rows = new List<Dictionary<string, int>>();

foreach (var post in posts)
{
  // One dictionary per post: maps each word to its occurrence count.
  var row = new Dictionary<string, int>();

  foreach (var word in GetWordsFromPost(post))
  {
    IncrementContentOfRow(row, word);
  }

  rows.Add(row);
}

// ...

private void IncrementContentOfRow(IDictionary<string, int> row, string word)
{
  // Start the count at zero the first time a word is seen.
  int oldValue;
  if (!row.TryGetValue(word, out oldValue))
  {
    oldValue = 0;
  }

  row[word] = oldValue + 1;
}
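To get from there to the matrix the question describes, the per-post dictionaries still need to be projected onto a shared vocabulary. A minimal sketch continuing from the rows list above (assuming using System.Linq; is in scope):

// One column per distinct word across all posts, in a stable order.
var vocabulary = rows.SelectMany(r => r.Keys).Distinct().OrderBy(w => w).ToList();

// One matrix row per post; TryGetValue leaves count at 0 for absent words.
int[,] matrix = new int[rows.Count, vocabulary.Count];
for (int i = 0; i < rows.Count; i++)
{
  for (int j = 0; j < vocabulary.Count; j++)
  {
    int count;
    rows[i].TryGetValue(vocabulary[j], out count);
    matrix[i, j] = count;
  }
}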
You're right :-)... Still, I was hoping to use some NLP domain-specific tools so I get stop words trimmed. I will update my question.
LiorH
I think GATE does most of that legwork for you (removing commonly used words).
Vivin Paliath
@LiorH: Cool. @Vivin Paliath: Agreed, if you want to do more than the question originally stated, then GATE is probably a good way to go.
Or you can use this solution and just throw in stop-word removal yourself using one of the lists from http://en.wikipedia.org/wiki/Stop_words (see the sketch below).
ealdent
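A minimal sketch of that suggestion, replacing the inner loop of the snippet above; stopwords_en_it.txt is a hypothetical file with one stop word per line (e.g. copied from the Wikipedia lists), and using System.IO; plus using System.Linq; are assumed to be in scope:

// Load the stop words once, comparing case-insensitively.
var stopWords = new HashSet<string>(
  File.ReadAllLines("stopwords_en_it.txt"),
  StringComparer.OrdinalIgnoreCase);

// Same counting loop as before, just skipping stop words.
foreach (var word in GetWordsFromPost(post).Where(w => !stopWords.Contains(w)))
{
  IncrementContentOfRow(row, word);
}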
@ealdent: I would always go that way, because it's easier to test-drive code you own than code you don't, but I get why the OP wants to go a different way.
A: 

You can check out:

  • bow - a veteran C library for text classification; I know it stores the matrix, though it may require some hacking to get at it.
  • Weka - a Java machine-learning framework that can handle text and build the matrix
  • Sujit Pal's blog post on building the term-document matrix from scratch
  • If you insist on using Lucene, you should create an index using term vectors, and use something like a loop over getTermFreqVector() to get the matrix (see the sketch below).
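A hedged sketch of that last option, written in C# against Lucene.Net (whose API at the time mirrored Java Lucene's); exact member names differ between Lucene.Net versions, so treat the calls below as assumptions. It also assumes the indexed field, here hypothetically named "contents", was written with Field.TermVector.YES, since otherwise GetTermFreqVector() returns null:

using System.IO;
using Lucene.Net.Index;
using Lucene.Net.Store;

// Open the index read-only and walk every document's stored term vector.
var reader = IndexReader.Open(FSDirectory.Open(new DirectoryInfo("index")), true);
for (int doc = 0; doc < reader.MaxDoc(); doc++)
{
  TermFreqVector tfv = reader.GetTermFreqVector(doc, "contents");
  if (tfv == null) continue;  // no term vector stored for this document

  string[] terms = tfv.GetTerms();          // the words (columns)
  int[] freqs = tfv.GetTermFrequencies();   // counts aligned with terms
  // terms[k] occurring freqs[k] times gives one row of the term-document matrix.
}
reader.Close();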
Yuval F
A: 

Thanks to @Mikos' comment, I googled the term "term-document matrix" and found TMG (Text to Matrix Generator).

I found it suitable for my needs.

LiorH