views: 326

answers: 2
I am using NLTK's default POS tagging and default tokenization, and they seem sufficient. I'd like to use their default chunker too.

I am reading the NLTK book, but it does not seem like there is a default chunker?

+2  A: 

You can get out-of-the-box named entity chunking with the nltk.ne_chunk() method. It takes a list of POS-tagged tuples:

nltk.ne_chunk([('Barack', 'NNP'), ('Obama', 'NNP'), ('lives', 'NNS'), ('in', 'IN'), ('Washington', 'NNP')])

results in:

Tree('S', [Tree('PERSON', [('Barack', 'NNP')]), Tree('ORGANIZATION', [('Obama', 'NNP')]), ('lives', 'NNS'), ('in', 'IN'), Tree('GPE', [('Washington', 'NNP')])])

It identifies Barack as a person, but misclassifies Obama as an organization. So, not perfect.

ealdent
What if I am not very concerned about named entities, but chunking in general? For example, "the yellow dog" is a chunk, and "is running" is a chunk.
TIMEX
Yeah, for that there's no default to my knowledge (though I don't know everything about NLTK, to be sure). You could use a RegexpChunkParser, though you'll have to develop the rules yourself. There's an example here: http://gnosis.cx/publish/programming/charming_python_b18.txt
ealdent
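The rule-based route ealdent suggests can be sketched with nltk.RegexpParser, which applies a tag-pattern chunk grammar to a POS-tagged sentence. The two rules below are illustrative assumptions, not a tuned ruleset:

```python
import nltk

# A minimal sketch: chunk "the yellow dog" as an NP and "is running" as a
# verb group using hand-written tag patterns. The grammar is an assumption.
grammar = r"""
  NP: {<DT>?<JJ>*<NN.*>+}   # optional determiner, adjectives, then nouns
  VP: {<VB.*>+}             # one or more verb tokens
"""
chunker = nltk.RegexpParser(grammar)

tagged = [('the', 'DT'), ('yellow', 'JJ'), ('dog', 'NN'),
          ('is', 'VBZ'), ('running', 'VBG')]
print(chunker.parse(tagged))
# → (S (NP the/DT yellow/JJ dog/NN) (VP is/VBZ running/VBG))
```

This needs no trained model or downloaded data, but the coverage is only as good as the rules you write.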
+1  A: 

I couldn't find a default chunker/shallow parser either, although the book describes how to build and train one with example features. Coming up with additional features to get good performance shouldn't be too difficult.

See Chapter 7's section on Training Classifier-based Chunkers.

James Clarke
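For reference, the tagger-as-chunker idea from Chapter 7 can be sketched as follows: treat chunking as tagging by mapping each POS tag to an IOB chunk label. The tiny training set here is a stand-in assumption for the CoNLL-2000 corpus the book trains on:

```python
import nltk

# Sketch of the Chapter 7 approach with a unigram tagger over POS tags.
# The two toy training "sentences" below are assumptions; real training
# would use nltk.corpus.conll2000 chunked sentences.
train_data = [
    [('DT', 'B-NP'), ('JJ', 'I-NP'), ('NN', 'I-NP'), ('VBZ', 'O'), ('VBG', 'O')],
    [('NNP', 'B-NP'), ('VBD', 'O'), ('IN', 'O'), ('DT', 'B-NP'), ('NN', 'I-NP')],
]
tagger = nltk.UnigramTagger(train_data)

def chunk(tagged_sentence):
    """Turn a POS-tagged sentence into a chunk Tree via IOB tags."""
    pos_tags = [pos for (word, pos) in tagged_sentence]
    iob_tags = [iob for (pos, iob) in tagger.tag(pos_tags)]
    conlltags = [(word, pos, iob)
                 for (word, pos), iob in zip(tagged_sentence, iob_tags)]
    return nltk.chunk.conlltags2tree(conlltags)

print(chunk([('the', 'DT'), ('yellow', 'JJ'), ('dog', 'NN'), ('is', 'VBZ')]))
# → (S (NP the/DT yellow/JJ dog/NN) is/VBZ)
```

The book's classifier-based version replaces the unigram tagger with a feature-based classifier, but the surrounding conversion between trees and IOB tags is the same.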