I need an algorithm to determine if a sentence, paragraph or article is negative or positive in tone... or better yet, how negative or positive.

For instance:

Jason is the worst SO user I have ever witnessed (-10)

Jason is an SO user (0)

Jason is the best SO user I have ever seen (+10)

Jason is the best at sucking with SO (-10)

While, okay at SO, Jason is the worst at doing bad (+10)

Not easy, huh? :)

I don't expect somebody to explain this algorithm to me, but I assume there is already much work on something like this in academia somewhere. If you can point me to some articles or research, I would love it.

Thanks.

+4  A: 

This falls under the umbrella of Natural Language Processing, and so reading about that is probably a good place to start.

If you don't want to get into a very complicated problem, you can just create lists of "positive" and "negative" words (and weight them if you want) and do word counts on sections of text. Obviously this isn't a "smart" solution, but it gets you some information with very little work, where doing serious NLP would be very time consuming.

One of your examples ("Jason is the best at sucking with SO") would be marked positive by this approach when it is in fact negative, unless you happen to weight "sucking" more heavily than "best". But these are small text samples; if you're looking at paragraphs or more of text, weighting becomes more reliable, unless someone is purposefully trying to fool your algorithm.
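To make that concrete, here's a minimal Python sketch of the weighted word-list idea; the lexicon and weights below are made-up placeholders, and a real list would be far larger:

    # Minimal sketch of the weighted word-list approach. The lexicon is a
    # made-up placeholder; a real one would contain thousands of entries.
    WEIGHTS = {
        "best": 3, "love": 2, "okay": 1,
        "worst": -3, "sucking": -4, "bad": -2,
    }

    def score(text):
        """Sum the weights of known words; unknown words count as zero."""
        return sum(WEIGHTS.get(w, 0) for w in text.lower().split())

    print(score("Jason is the best SO user I have ever seen"))  # 3
    # -1, but only because "sucking" (-4) happens to outweigh "best" (+3):
    print(score("Jason is the best at sucking with SO"))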

SoapBox
Thank you. The problem is, the text I am analyzing is not as subtle as my examples. For instance, I want to be able to see if an article is neutral, positive or negative about a subject. Weighting words will not be enough. ;( But, Natural Language Processing is a start. Thanks.
Jason
A: 

It's all about context, I think. If you're looking for the people who are best at sucking with SO, then sucking the best can be a positive thing. For determining what is bad or good, and how much, I would recommend looking into fuzzy logic.

It's a bit like being tall. Someone who's 1.95m can be considered tall. If you place that person in a group of people who are all over 2.10m, he looks short.
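A toy Python illustration of the fuzzy idea; the 1.70m and 2.10m breakpoints are invented for the example:

    # Toy fuzzy membership function for "tall": a degree of truth between
    # 0 and 1 instead of a hard yes/no. The breakpoints are invented.
    def tall(height_m, low=1.70, high=2.10):
        if height_m <= low:
            return 0.0
        if height_m >= high:
            return 1.0
        return (height_m - low) / (high - low)  # linear ramp in between

    print(tall(1.95))                       # 0.625 -- fairly tall in general
    print(tall(1.95, low=2.10, high=2.30))  # 0.0 -- short among 2.10m+ people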

Sorskoot
+15  A: 

There is a sub-field of natural language processing called sentiment analysis that deals specifically with this problem domain. There is a fair amount of commercial work done in the area, because consumer products are so heavily reviewed in online user forums (UGC, or user-generated content). There is also a prototype platform for text analytics called GATE from the University of Sheffield, and a Python project called NLTK. Both are considered flexible, but not very high performance. One or the other might be good for working out your own ideas.
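For example, here's a minimal NLTK sketch of a sentiment classifier trained on its bundled movie_reviews corpus (you'd need to fetch the corpus with nltk.download first); this is just one quick way in, not the GATE approach:

    # Minimal NLTK sentiment sketch: naive Bayes on the movie_reviews corpus.
    # Run nltk.download("movie_reviews") once before using this.
    import random
    import nltk
    from nltk.corpus import movie_reviews

    def features(words):
        # Simple bag-of-words presence features.
        return {word: True for word in words}

    labeled = [(features(movie_reviews.words(fid)), category)
               for category in movie_reviews.categories()  # 'neg', 'pos'
               for fid in movie_reviews.fileids(category)]
    random.shuffle(labeled)

    train, test = labeled[200:], labeled[:200]
    classifier = nltk.NaiveBayesClassifier.train(train)
    print(nltk.classify.accuracy(classifier, test))  # roughly 0.7-0.8
    print(classifier.classify(features("worst film ever".split())))  # likely 'neg'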

fawce
Awesome, thank you.
Jason
A: 

Maybe essay-grading software could be used to estimate tone? WIRED article.
Possible reference. (I couldn't read it.)
This report compares writing skill to the Flesch-Kincaid Grade Level needed to read it!
Page 4 of e-rater says that they look at misspelling and such. (Maybe bad posts are misspelled too!)
Slashdot article.

You could also use an email filter of some sort for negativity instead of spam-ness.

waynecolvin
+4  A: 

As pointed out, this comes under sentiment analysis in natural language processing.
AFAIK, GATE doesn't have any component that does sentiment analysis.
In my experience, I have implemented an algorithm which is an adaptation of the one in the paper 'Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis' by Theresa Wilson, Janyce Wiebe, and Paul Hoffmann as a GATE plugin, and it gives reasonably good results. It could help you if you want to bootstrap the implementation.

trex279
+2  A: 

Depending on your application you could do it via a Bayesian Filtering algorithm (which is often used in spam filters).

One way to do it would be to have two filters: one for positive documents and another for negative documents. You would seed the positive filter with positive documents (by whatever criteria you use) and the negative filter with negative documents. The trick would be to find these documents; maybe you could set it up so your users effectively rate documents.

The positive filter (once seeded) would look for positive words. Maybe it would end up with words like love, peace, etc. The negative filter would be seeded appropriately as well.

Once your filters are set up, you run the test text through them to come up with positive and negative scores. Based on these scores and some weighting, you could come up with your numeric score.

Bayesian Filters, though simple, are surprisingly effective.
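A rough Python sketch of the two-filter idea, with toy seed documents and add-one smoothing; a real system would be trained on many user-rated documents:

    # Toy sketch of the two-filter Bayesian idea: one word-frequency table
    # per class, combined with add-one (Laplace) smoothing. The seed
    # documents are placeholders for real rated documents.
    import math
    from collections import Counter

    positive_seed = ["i love this it is the best", "peace and joy wonderful"]
    negative_seed = ["i hate this it is the worst", "awful terrible bad"]

    def train(docs):
        counts = Counter(w for d in docs for w in d.split())
        return counts, sum(counts.values())

    pos_counts, pos_total = train(positive_seed)
    neg_counts, neg_total = train(negative_seed)
    vocab = set(pos_counts) | set(neg_counts)

    def log_prob(words, counts, total):
        # Smoothing keeps unseen words from zeroing out a score.
        return sum(math.log((counts[w] + 1) / (total + len(vocab)))
                   for w in words)

    def score(text):
        """Positive result leans positive, negative leans negative."""
        words = text.lower().split()
        return (log_prob(words, pos_counts, pos_total)
                - log_prob(words, neg_counts, neg_total))

    print(score("the best thing ever"))   # > 0
    print(score("the worst thing ever"))  # < 0

Note that the "two filters" here really amount to a single naive Bayes classifier with two class models; the filters are just the two word-frequency tables.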

TAG
This is just a minor issue, but why "two filters"? It's basically a single filter that will be trained (and tested) on positive and negative documents, isn't it?
Yaser Sulaiman
A: 

How about sarcasm:

  • Jason is the best SO user I have ever seen, NOT
  • Jason is the best SO user I have ever seen, right
Osama ALASSIRY
+4  A: 

In my company we have a product that does this, and it performs well. I did most of the work on it. I can give a brief idea:

You need to split the paragraph into sentences and then split each sentence into smaller sub-sentences, splitting on commas, hyphens, semicolons, colons, 'and', 'or', etc. In some cases, each sub-sentence will exhibit a totally separate sentiment.

Some sentences, however, will have to be joined back together even after splitting.

E.g.: The product is amazing, excellent and fantastic.

We have developed a comprehensive set of rules on the types of sentences which need to be split and which shouldn't be (based on the POS tags of the words).

On the first level, you can use a bag-of-words approach, meaning: have a list of positive and negative words/phrases and check every sub-sentence against it. While doing this, also look at negation words like 'not', 'no', etc., which will flip the polarity of the sentence.

If you still can't find the sentiment, you can fall back to a naive Bayes approach. On its own, this approach is not very accurate (about 60%), but if you apply it only to the sentences which fail to pass through the first set of rules, you can easily get to 80-85% accuracy.

The important part is the positive/negative word list and the way you split things up. If you want, you can go a level higher by implementing HMMs (Hidden Markov Models) or CRFs (Conditional Random Fields), but I am not a pro in NLP and someone else may fill you in on that part.
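A rough Python sketch of just the first-level pass; the lexicon and split list are toy placeholders, and the POS-based joining rules and the naive Bayes fallback described above are omitted:

    # First-level pass only: split on connectives, then score each
    # sub-sentence against a polarity lexicon, flipping on negation words.
    # The lexicon and split list are toy placeholders.
    import re

    LEXICON = {"amazing": 1, "excellent": 1, "fantastic": 1, "good": 1,
               "bad": -1, "terrible": -1, "worst": -1}
    NEGATIONS = {"not", "no", "never"}

    def sub_sentences(sentence):
        # Split on commas, hyphens, semicolons, colons, "and", "or".
        parts = re.split(r"[,;:\-]|\band\b|\bor\b", sentence.lower())
        return [p.strip() for p in parts if p.strip()]

    def score_sub(sub):
        words = sub.split()
        flip = -1 if any(w in NEGATIONS for w in words) else 1
        return flip * sum(LEXICON.get(w, 0) for w in words)

    def score_sentence(sentence):
        return sum(score_sub(s) for s in sub_sentences(sentence))

    print(score_sentence("The product is amazing, excellent and fantastic"))  # 3
    print(score_sentence("The product is not good"))                          # -1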

For the curious, we implemented all of this in Python with NLTK and the Reverend Bayes module.

It's pretty simple and handles most sentences. You may, however, face problems when trying to tag content from the web, since most people don't write proper sentences there. Handling sarcasm is also very hard.

cnu
A: 

Ah, I remember one Java library for this called LingPipe (commercial license) that we evaluated. It works fine for the example corpus available at their site, but for real data it sucks pretty badly.

cnu