There are lots of parsers and lexers for scripts (i.e., structured computer languages), but I'm looking for one that can break an (almost) unstructured text document into larger sections, e.g. chapters, paragraphs, etc.

It's relatively easy for a person to identify them: where the Table of Contents or the acknowledgements are, or where the main body starts. It is also possible to build rule-based systems to identify some of these (such as paragraphs).
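
For concreteness, this is the sort of crude, rule-based splitting I mean (a throwaway Python sketch of my own, using nothing more than blank lines as paragraph breaks; it's exactly the kind of thing I'd rather not hand-roll for every block type):

    import re

    def paragraphs(text):
        # Crude rule: a blank line separates paragraphs.
        return [chunk.strip()
                for chunk in re.split(r"\n\s*\n", text)
                if chunk.strip()]

    print(paragraphs("First paragraph.\nStill the first.\n\nSecond paragraph.\n"))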

I don't expect it to be perfect, but does anyone know of such a broad 'block-based' lexer/parser? Or could you point me in the direction of literature which may help?

+1  A: 

Many lightweight markup languages like Markdown (which, incidentally, SO uses), reStructuredText, and (arguably) POD are similar to what you're talking about. They have minimal syntax and break input down into parseable syntactic pieces. You might be able to get some useful information by reading about their implementations.
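
For example, here is a minimal sketch (assuming the docutils package is installed) that hands some loosely structured text to the stock reStructuredText parser and prints the kinds of nodes it recognises:

    from docutils.core import publish_doctree

    source = ("Chapter One\n===========\n\nFirst paragraph.\n\n"
              "Chapter Two\n===========\n\nAnother paragraph.\n")

    # publish_doctree() runs the stock reST parser and returns a document tree.
    doctree = publish_doctree(source)

    # Print the node types it recognised (section, title, paragraph, ...).
    for node in doctree.traverse():
        print(type(node).__name__)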

Noufal Ibrahim
I'd forgotten about POD! What I really need looks to be a combination of POD, Markdown and reStructuredText. They definitely gave me some pointers. It does look as if I'm going to have to build my own.
wilson32
Do you already have text, or do you want to start from scratch? Perhaps you can convert your existing document (if you have one) into reST or something and use the stock parser?
Noufal Ibrahim
The trouble is we have no idea what the new incoming document will look like. We know the process we envisage will be only semi-automatic. I suspect it will be easier to build a parseable document from a copy of the original, which we can then use as a source for any relevant formatter.
wilson32
Well then, a lightweight format like the ones I've mentioned would be the right starting point for you. Try parsing it with one of them; it will probably complain about syntax errors but will still do *some* processing. Good luck!
Noufal Ibrahim
A: 

Most of the lex/yacc kind of programs work with a well-defined grammar. If you can define your grammar in a BNF-like format (most of these parsers accept a similar syntax), then you can use any of them. That may be stating the obvious. However, you can still be a little fuzzy about the 'blocks' (tokens) of text which would be part of your grammar; after all, you define the rules for your tokens.
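
For illustration, here is a rough sketch of that idea in Python (my own made-up token rules, not a full BNF grammar; you would tune the patterns for your documents and feed the tokens to whatever parser you settle on):

    import re

    # Each block type gets its own rule; these patterns are only illustrative guesses.
    TOKEN_RULES = [
        ("BLANK",     r"\s*\n"),                  # blank line(s): separators
        ("HEADING",   r"[A-Z][A-Z .]{3,60}\n"),   # e.g. "TABLE OF CONTENTS"
        ("PARAGRAPH", r"(?:.+\n?)+"),             # a run of non-blank lines
    ]

    def tokenize(text):
        pos = 0
        while pos < len(text):
            for name, pattern in TOKEN_RULES:
                match = re.match(pattern, text[pos:])
                if match:
                    if name != "BLANK":
                        yield name, match.group().strip()
                    pos += match.end()
                    break
            else:
                pos += 1  # nothing matched; skip a character and carry on

    doc = "TABLE OF CONTENTS\n\n1. Introduction\n\nFirst paragraph of the intro.\n"
    for kind, block in tokenize(doc):
        print(kind, "->", block)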

I have used the Parse::RecDescent Perl module in the past with varying levels of success for similar projects.

Sorry, this may not be a great answer; it's more a case of sharing my experiences from similar projects.

Maxwell Troy Milton King
Lucene is an indexer, isn't it? Does it really 'parse' anything?
Noufal Ibrahim
You are right. I guess I was thinking more about the kind of functionality a 'Lucene Analyzer' would give you, and maybe assuming too much behind the question as well. Let me know if you think it's misleading.
Maxwell Troy Milton King
I was coming to that conclusion, but as a last resort, I asked the question. We may be able to define our documents in some form of BNF, which we can then use to parse them.
wilson32
A: 
  1. Define the annotation standard, which indicates how you would like to break things up.
  2. Go on to Amazon Mechanical Turk and ask people to label 10K documents using your annotation standard.
  3. Train a CRF (which is like an HMM, but better) on this training data (sketched below).

If you actually want to go this route, I can elaborate on the details. But this will be a lot of work.
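
To give a feel for step 3, here is a very rough sketch using the sklearn-crfsuite package, treating each document as a sequence of lines (the features and labels are invented placeholders, not a real annotation standard):

    import sklearn_crfsuite

    # Every line gets a feature dict and a label from your annotation standard.
    def line_features(lines, i):
        line = lines[i]
        return {
            "is_upper": line.isupper(),
            "is_short": len(line) < 40,
            "is_blank": not line.strip(),
            "starts_digit": line[:1].isdigit(),
            "prev_blank": i > 0 and not lines[i - 1].strip(),
        }

    # Toy stand-in for the 10K Turk-labelled documents.
    docs = [["TABLE OF CONTENTS", "", "1. Introduction", "", "Text of the intro."]]
    labels = [["heading", "blank", "toc-entry", "blank", "paragraph"]]

    X = [[line_features(doc, i) for i in range(len(doc))] for doc in docs]
    y = labels

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
    crf.fit(X, y)
    print(crf.predict(X))  # predicted label sequence for each document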

Joseph Turian
A: 

Try Pygments, GeSHi, or prettify.

They can handle just about anything you throw at them and are very forgiving of errors in your grammar as well as your documents.
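
For example, with Pygments (the Python one of the three, and assuming a lexer exists for something reasonably close to your input, here Markdown) you can get a token stream out of more or less arbitrary text:

    from pygments import lex
    from pygments.lexers import get_lexer_by_name

    source = "# A heading\n\nA paragraph of mostly free text.\n"

    # lex() yields (token_type, text) pairs; unrecognised input is still
    # emitted as tokens rather than raising an error.
    for token_type, value in lex(source, get_lexer_by_name("markdown")):
        print(token_type, repr(value))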

References:
Gitorious uses prettify,
GitHub uses Pygments,
Rosetta Code uses GeSHi.

Naveen