views:

28

answers:

2

Hello community! ;)

I'm building a rich web application intended to search over some historical documents. Those documents have their own structure. I'm using Lucene 3.x to build the search engine.

So far I have built my own Analyzer and a SimpleToken class to fit my needs. So what is the problem?

The problem is that I have three different files representing the same document. One is the original document: plain text without any markup. The other two are XML-marked versions of it: one represents the topographic structure of the document (the original text plus tags describing its structure), and the other represents its numbering and columns (again, the original text plus tags splitting it into pages and columns). Merging those two XML documents into one is extremely difficult and confusing, and the files are really big (over 50,000 lines). The thing is, I really need the information from both XML documents.

That said, what do you think is the best approach to index all of this? I'm not experienced with Lucene; it's actually my first time working with it. First I need to know how I'm going to get the text out of the documents (maybe with some XML parser?), and how I'm going to merge the information from the two marked-up files. Do you think I should create two indexes, one for each marked-up document, and then somehow merge those indexes? I really need some orientation.
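One idea I've been toying with, to sidestep merging the two XML trees (a sketch only, not tested against the real files): since both XML files wrap the same underlying text, a SAX parser could strip the tags from each file and record every element as a character-offset span over the tag-free text; the span lists from both files could then be overlaid on the one shared plain text. The class and tag names below are made up for illustration:

```java
import java.io.StringReader;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

// Records each XML element as a (tag, start, end) span over the tag-free
// text, so annotations from two XML files marking the same text can be
// overlaid on one shared string instead of merging the XML trees.
public class SpanExtractor extends DefaultHandler {
    public static class Span {
        public final String tag;
        public final int start;
        public int end;
        Span(String tag, int start) { this.tag = tag; this.start = start; }
    }

    private final StringBuilder text = new StringBuilder();
    private final List<Span> spans = new ArrayList<Span>();
    private final Deque<Span> open = new ArrayDeque<Span>();

    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        Span s = new Span(qName, text.length());   // span starts at current text offset
        open.push(s);
        spans.add(s);
    }

    @Override
    public void endElement(String uri, String local, String qName) {
        open.pop().end = text.length();            // close the innermost open span
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        text.append(ch, start, length);            // accumulate the tag-free text
    }

    public String getText() { return text.toString(); }
    public List<Span> getSpans() { return spans; }

    public static SpanExtractor parse(String xml) throws Exception {
        SpanExtractor handler = new SpanExtractor();
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new InputSource(new StringReader(xml)), handler);
        return handler;
    }

    public static void main(String[] args) throws Exception {
        SpanExtractor h = parse("<doc><page>Hello</page><page>World</page></doc>");
        System.out.println(h.getText());           // prints: HelloWorld
        for (Span s : h.getSpans()) {
            System.out.println(s.tag + " " + s.start + "-" + s.end);
        }
    }
}
```

Running this on both XML files would give the same plain text twice, plus two independent span lists (structure spans from one file, page/column spans from the other) that line up by offset.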

Any help would be appreciated ;)

Thank You!

A: 
Fabio
A: 

I would work from the reverse direction: what will users be searching for? Presumably, if your XML has something like `<domain>blah</domain>`, a search for "blah" should weight that match higher (because a match in the domain is "worth more" than a match in the body).

However, stuff like the page number? I doubt that anyone is going to do a search in which this matters.

So I would just use the first one, which has markup about the domain (if my assumptions about what is important to users are correct). Tika is a library meant to extract data from various file types (including XML) and feed it into Lucene.

Xodarap
Thanks for answering ;) The page numbers are important because the user may want to confirm that the occurrences are right, i.e. actually present in the printed book. People who work with history want every detail displayed :| Domains contain a large amount of text, so trying to find an occurrence on paper without a clue about the page number is very difficult. Also, the application needs to display the entire book and let users navigate by page. So I guess I need to merge both XML files to do that. What do you think?
Fabio
What you are describing sounds almost like each page should be its own result - e.g. "Matched book X on page Y with text 'blah blah blah...'". Maybe something to consider is indexing each page as its own "document" and then using [field collapsing](http://wiki.apache.org/solr/FieldCollapsing) to merge them into a single book if that is what the user wants?
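A minimal sketch of that page-per-document idea, assuming Lucene 3.6 and made-up field names ("book", "page", "text"):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

// Indexes one Lucene document per page, carrying the book title, the
// page number, and the page text. Field names are assumptions, not a
// fixed schema.
public class PageIndexer {
    public static Directory indexPages(String book, String[] pages) throws Exception {
        Directory dir = new RAMDirectory();
        IndexWriterConfig cfg = new IndexWriterConfig(
            Version.LUCENE_36, new StandardAnalyzer(Version.LUCENE_36));
        IndexWriter writer = new IndexWriter(dir, cfg);
        for (int i = 0; i < pages.length; i++) {
            Document doc = new Document();
            doc.add(new Field("book", book, Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.add(new Field("page", String.valueOf(i + 1), Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.add(new Field("text", pages[i], Field.Store.YES, Field.Index.ANALYZED));
            writer.addDocument(doc);
        }
        writer.close();
        return dir;
    }

    // Returns the stored "page" value of the first hit for a term, or null.
    public static String firstHitPage(Directory dir, String term) throws Exception {
        IndexReader reader = IndexReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);
        TopDocs hits = searcher.search(new TermQuery(new Term("text", term)), 1);
        String page = hits.totalHits == 0
            ? null : searcher.doc(hits.scoreDocs[0].doc).get("page");
        searcher.close();
        reader.close();
        return page;
    }
}
```

Each hit then naturally comes back as "book X, page Y"; grouping pages back into a single book result would be your job on plain Lucene 3.x (field collapsing as linked above is a Solr feature).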
Xodarap