views: 290
answers: 7

Here's the problem. I have a bunch of large text files with paragraphs and paragraphs of written matter. Each paragraph contains references to a few people (names) and documents a few topics (places, objects).

How do I data mine this pile to assemble some categorised library? In general, I need two things:

  1. I don't know what I'm looking for, so I need a program that finds the most frequently used words and multi-word phrases ("Jacob Smith", "bluewater inn", "arrow").

  2. Then, knowing the keywords, I need a program to help me search for related paragraphs, and then sort and refine the results manually.

+1  A: 

Looks like you're trying to create an index?

I think Learning Perl has information on finding the frequency of words in a text file, so that's not a particularly hard problem.

But do you really want to know that "the" or "a" is the most common word?

If you're looking for some kind of topical index, the words you actually care about are probably down the list a bit, intermixed with more words you don't care about.

You could start by getting rid of "stop words" at the front of the list to filter your results a bit, but nothing would beat associating keywords that actually reflect the topic of the paragraphs, and that requires context.
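
For the counting step itself, here is a minimal sketch in Python (the book mentioned above is about Perl, but the idea is the same); the stop-word list and file names are placeholders you would adapt:

    import re
    from collections import Counter

    # A tiny illustrative stop-word list; in practice you would use a much fuller one.
    STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it"}

    def word_frequencies(paths):
        counts = Counter()
        for path in paths:
            with open(path, encoding="utf-8") as f:
                words = re.findall(r"[a-z']+", f.read().lower())
                counts.update(w for w in words if w not in STOP_WORDS)
        return counts

    # Top 50 candidate keywords across all files.
    for word, n in word_frequencies(["file1.txt", "file2.txt"]).most_common(50):
        print(n, word)

For multi-word names like "Jacob Smith" you would count adjacent word pairs (bigrams) in the same way.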

Anyway, I could be off base, but there you go. ;)

John at CashCommons
+1  A: 

The problem with what you ask is that you don't know what you're looking for. If you had some sort of weighted list of terms that you cared about, then you'd be in good shape.

Semantically, the problem is twofold:

  • Generally, the most-used words are the least relevant. Even if you use a stop-words file, a lot of chaff remains.
  • Generally, the least-used words are the most relevant. For example, "bluewater inn" is probably infrequent.

Let's suppose you had something that did what you ask and produced a clean list of all the keywords that appear in your texts. There would be thousands of such keywords. Finding "bluewater inn" in a list of thousands of terms is actually harder than finding it in the text itself (assuming you don't know what you're looking for): when you skim the texts, the surrounding context points you to the paragraph containing "bluewater inn", but a bare list has no context at all.
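
The twofold problem above (frequent-but-boring terms versus rare-but-relevant ones) is what TF-IDF weighting, mentioned in a later answer, tries to balance. A rough sketch in Python, assuming the paragraphs have already been split into non-empty lists of lowercase words:

    import math
    from collections import Counter

    def tf_idf(paragraphs):
        """paragraphs: list of lists of lowercase words; returns one {word: score} dict per paragraph."""
        n = len(paragraphs)
        df = Counter()                      # document frequency: paragraphs containing each word
        for words in paragraphs:
            df.update(set(words))
        scores = []
        for words in paragraphs:            # assumes every paragraph has at least one word
            tf = Counter(words)
            scores.append({w: (tf[w] / len(words)) * math.log(n / df[w]) for w in tf})
        return scores

Words that appear everywhere score near zero; words concentrated in a few paragraphs float to the top, which is roughly the weighted list of terms this answer says you would need.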

Why don't you tell us more about your application and process? Then perhaps we can help you better.

Rip Rowan
+1  A: 

I think what you want to do is called "entity extraction". This Wikipedia article has a good overview and a list of apps, including open source ones. I used to work on one of the commercial tools in the list, but not in a programming capacity, so I can't help you there.
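
As a rough illustration of what entity extraction produces, here is a sketch using NLTK (one open-source option, chosen here only for illustration; not the commercial tool mentioned above). The NLTK data models need a one-time download:

    import nltk

    # One-time downloads on first run:
    # for pkg in ("punkt", "averaged_perceptron_tagger", "maxent_ne_chunker", "words"):
    #     nltk.download(pkg)

    def extract_entities(text):
        entities = []
        for sentence in nltk.sent_tokenize(text):
            tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sentence)))
            for subtree in tree:
                if hasattr(subtree, "label"):              # named-entity chunks are subtrees
                    name = " ".join(token for token, tag in subtree.leaves())
                    entities.append((subtree.label(), name))   # e.g. ("PERSON", "Jacob Smith")
        return entities

    print(extract_entities("Jacob Smith stayed at the Bluewater Inn in Boston."))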

Patrick Cuff
+2  A: 

Hi Jeremy, your question is a tiny bit open-ended :) Chances are, you will find modules for whatever analysis you want to do in the UIMA framework:

Unstructured Information Management applications are software systems that analyze large volumes of unstructured information in order to discover knowledge that is relevant to an end user. An example UIM application might ingest plain text and identify entities, such as persons, places, organizations; or relations, such as works-for or located-at.

UIMA is made of many things:

UIMA enables applications to be decomposed into components, for example "language identification" => "language specific segmentation" => "sentence boundary detection" => "entity detection (person/place names etc.)". Each component implements interfaces defined by the framework and provides self-describing metadata via XML descriptor files. The framework manages these components and the data flow between them. Components are written in Java or C++; the data that flows between components is designed for efficient mapping between these languages.

You may also find Open Calais a useful API for text analysis; depending on how big your heap of documents is, it may be more or less appropriate.

If you want it quick and dirty -- create an inverted index that stores all locations of words (basically a big map of words to all file ids in which they occur, paragraphs in those files, lines in the paragraphs, etc). Also index tuples so that given a fileid and paragraph you can look up all the neighbors. This will do what you describe, but it takes quite a bit of tweaking to get it to pull up meaningful correlations (some keywords to start you off on your search: information retrieval, TF-IDF, Pearson correlation coefficient).
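
A minimal sketch of that quick-and-dirty inverted index in Python (the file names and the blank-line paragraph-splitting rule are illustrative assumptions):

    import re
    from collections import defaultdict

    def build_index(files):
        """files: dict of file_id -> full text. Maps each word to the (file_id, para_no) pairs it occurs in."""
        index = defaultdict(set)
        paragraphs = {}   # (file_id, para_no) -> paragraph text, for pulling up neighbours later
        for file_id, text in files.items():
            for para_no, para in enumerate(text.split("\n\n")):
                paragraphs[(file_id, para_no)] = para
                for word in re.findall(r"[a-z']+", para.lower()):
                    index[word].add((file_id, para_no))
        return index, paragraphs

    index, paragraphs = build_index({"doc1": open("doc1.txt", encoding="utf-8").read()})
    # Paragraphs mentioning both words of a phrase:
    for loc in index.get("bluewater", set()) & index.get("inn", set()):
        print(loc, paragraphs[loc][:80])

Given a hit at (file_id, para_no), the neighbouring paragraphs are just (file_id, para_no - 1) and (file_id, para_no + 1), which covers the "look up all the neighbors" part; ranking the hits is where the TF-IDF and correlation keywords above come in.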

SquareCog
Sounds awesome :D ... I'll check it out and let you know. Reply to comments on the answer, okay?!
Jenko
+1  A: 

Ned Batchelder gave a great talk at DevDays Boston about Python.

He presented a spell-corrector written in Python that does pretty much exactly what you want.

You can find the slides and source code here: http://nedbatchelder.com/text/devdays.html

Stefan
A: 
Dracoder
+1  A: 

I recommend that you have a look at R. In particular, look at the tm package. Here are some relevant links:

More generally, there are a large number of text mining packages listed on the Natural Language Processing task view on CRAN.

Shane