views: 682

answers: 2

I am embarking on an NLP project for sentiment analysis.

I have successfully installed NLTK for Python (it seems like a great piece of software for this). However, I am having trouble understanding how it can be used to accomplish my task.

Here is my task:

  1. I start with one long piece of data (let's say several hundred tweets on the subject of the UK election, fetched from the Twitter web service)
  2. I would like to break this up into sentences (or chunks no longer than 100 or so characters). I guess I can just do this in Python? (See the sketch after this list.)
  3. Then I want to search through all the sentences for specific phrases, e.g. "David Cameron"
  4. Then I would like to check each sentence for positive/negative sentiment and count them accordingly
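Something like this, perhaps, for steps 2 and 3 (a rough sketch; tweets.txt is a placeholder for wherever the fetched tweets end up):

    import nltk
    from nltk.tokenize import sent_tokenize

    # nltk.download('punkt')  # one-off download of the sentence tokenizer model

    raw = open('tweets.txt').read()    # placeholder for the fetched tweet text
    sentences = sent_tokenize(raw)     # step 2: break the text into sentences

    # step 3: keep only the sentences that mention the phrase of interest
    matches = [s for s in sentences if 'David Cameron' in s]
    print(len(matches), 'sentences mention David Cameron')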

NB: I am not too worried about accuracy, because my data sets are large, and I'm not too worried about sarcasm either.

Here are the troubles I am having:

  1. All the data sets I can find, e.g. the movie review corpus that comes with NLTK, aren't in web-service format. It looks like these have had some processing done already; as far as I can see, the processing (by Stanford) was done with WEKA. Is it not possible for NLTK to do all this on its own? These data sets have all been organised into positive/negative already, e.g. the polarity dataset at http://www.cs.cornell.edu/People/pabo/movie-review-data/ How is this done? (To organise the sentences by sentiment, is it definitely WEKA, or something else?)

  2. I am not sure I understand why WEKA and NLTK would be used together; they seem to do much the same thing. If I'm processing the data with WEKA first to find sentiment, why would I need NLTK? Can someone explain why this might be necessary?

I have found a few scripts that get somewhat near this task, but they all use the same pre-processed data. Is it not possible to process the data myself to find sentiment in sentences, rather than using the samples from the link above?

Any help is much appreciated and will save me much hair!

Cheers Ke

+2  A: 

The movie review data has already been marked by humans as being positive or negative (the person who made the review gave the movie a rating which is used to determine polarity). These gold standard labels allow you to train a classifier, which you could then use for other movie reviews. You could train a classifier in NLTK with that data, but applying the results to election tweets might be less accurate than randomly guessing positive or negative. Alternatively, you can go through and label a few thousand tweets yourself as positive or negative and use this as your training set.

For a description of using Naive Bayes for sentiment analysis with NLTK: http://streamhacker.com/2010/05/10/text-classification-sentiment-analysis-naive-bayes-classifier/

Then in that code, instead of using the movie corpus, use your own data to calculate word counts (in the word_feats method).
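As a minimal sketch of that pipeline (the movie review corpus stands in here for your own labelled data; swap in your tweets and labels where the fileids are used):

    import nltk
    from nltk.corpus import movie_reviews
    from nltk.classify import NaiveBayesClassifier

    # nltk.download('movie_reviews'); nltk.download('punkt')  # one-off downloads

    def word_feats(words):
        # bag-of-words presence features, as in the tutorial above
        return dict((word, True) for word in words)

    # build (featureset, label) pairs from the labelled corpus
    neg_feats = [(word_feats(movie_reviews.words(fileids=[f])), 'neg')
                 for f in movie_reviews.fileids('neg')]
    pos_feats = [(word_feats(movie_reviews.words(fileids=[f])), 'pos')
                 for f in movie_reviews.fileids('pos')]

    classifier = NaiveBayesClassifier.train(neg_feats + pos_feats)

    # classify a new sentence with the trained model
    sentence = "David Cameron was brilliant tonight"
    print(classifier.classify(word_feats(nltk.word_tokenize(sentence))))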

ealdent
Yep, I ended up on that site after a bit of searching, but I guess I'm a bit stuck on how to get the statistic for each review. How can I use NLTK to give me a list of the review ids with a 1 or 0 for pos/neg? Cheers, Ke
Ke
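For the follow-up question above, one rough way to get that list, reusing word_feats and classifier from the sketch in the answer (the 1/0 encoding is just one choice; classifying the training data itself only demonstrates the mechanics):

    from nltk.corpus import movie_reviews

    # print one line per review id: 1 for pos, 0 for neg
    for f in movie_reviews.fileids():
        label = classifier.classify(word_feats(movie_reviews.words(fileids=[f])))
        print(f, 1 if label == 'pos' else 0)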
A: 

Why don't you use WSD (word sense disambiguation)? Use a disambiguation tool to find senses, and map polarity to the senses instead of the words. That way you will get somewhat more accurate results than with word-level polarity.
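A rough sketch of that idea, using NLTK's simplified Lesk implementation together with SentiWordNet (both are assumptions here; any WSD tool and sense-level polarity lexicon would do):

    import nltk
    from nltk.wsd import lesk
    from nltk.corpus import sentiwordnet as swn
    from nltk.tokenize import word_tokenize

    # nltk.download('punkt'); nltk.download('wordnet'); nltk.download('sentiwordnet')

    sentence = "David Cameron gave a strong performance in the debate"
    tokens = word_tokenize(sentence)

    score = 0.0
    for word in tokens:
        synset = lesk(tokens, word)    # pick a sense from the sentence context
        if synset is None:             # no WordNet sense found for this token
            continue
        senti = swn.senti_synset(synset.name())  # polarity of the sense, not the word
        score += senti.pos_score() - senti.neg_score()

    print('positive' if score > 0 else 'negative' if score < 0 else 'neutral', score)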

Kevin
Sounds cool. Do you have any papers or apps mentioning this?
mixdev