tags:

views:

525

answers:

6

Hello, I'm looking for feedback on which analyzer to use with an index that contains documents in multiple languages. Currently I am using the SimpleAnalyzer, as it seems to handle the broadest range of languages. Most of the documents to be indexed will be English, but the occasional double-byte language will be indexed as well.

Are there any other suggestions, or should I just stick with the SimpleAnalyzer?

Thanks

+1  A: 

SimpleAnalyzer really is simple: it just splits on non-letter characters and lower-cases the terms. I'd have thought that the StandardAnalyzer would give better results than SimpleAnalyzer even with non-English data. You could perhaps improve it slightly by supplying a custom list of stop words in addition to the default English ones.
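
As a rough illustration of that last point (my sketch, not part of the original answer): StandardAnalyzer can be handed a custom stop-word set. This assumes a recent Lucene release where the constructor takes a CharArraySet and the default English stop words are exposed via EnglishAnalyzer.getDefaultStopSet(); package locations and constructor signatures differ between Lucene versions, so check your release.

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.CharArraySet;
    import org.apache.lucene.analysis.en.EnglishAnalyzer;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;

    public class CustomStopWordsExample {
        public static Analyzer buildAnalyzer() {
            // Start from the default English stop words and add a few extra
            // (hypothetical) entries for other languages you expect to index.
            CharArraySet stopWords = new CharArraySet(EnglishAnalyzer.getDefaultStopSet(), true);
            stopWords.add("le");
            stopWords.add("la");
            stopWords.add("der");
            return new StandardAnalyzer(stopWords);
        }
    }

The same analyzer instance should then be used for both indexing and query parsing so the stop-word handling matches on both sides.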

skaffman
+1  A: 

I've used the StandardAnalyzer with non-English words and it works ok. It even deals with accented characters. If the language is CJK (Chinese, Japanese, Korean), Russian or German it may have problems, but I suspect most of the problems will be related to the stemming of words. If you don't have stemming enabled, it will probably be adequate.

adrianbanks
+4  A: 

From your description, I presume you have documents in multiple languages, but each document contains text in only one language.

For this case, you can use Nutch's language identification to detect the language of each document, and then index it with the corresponding language analyzer. To get correct results at search time, you also need to apply language identification to the search query and use the matching analyzer.

The upside here is that you will be able to use language-specific stemmers and stop words, improving the quality of search. The extra overhead while indexing should be acceptable. Queries where language identification picks the wrong language may suffer, though. I used this approach a couple of years back and the results were better than expected.

For CJK, you can apply a similar technique, but the tools might be different.
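
As a rough sketch of this approach (mine, not the answerer's code): map a detected language code to a per-language Lucene analyzer and fall back to StandardAnalyzer when detection fails. The detectLanguage call below is a placeholder for whatever identifier you use (Nutch's language identifier, Tika, etc.), and it assumes a recent Lucene release where the analyzer classes have no-argument constructors.

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.de.GermanAnalyzer;
    import org.apache.lucene.analysis.en.EnglishAnalyzer;
    import org.apache.lucene.analysis.fr.FrenchAnalyzer;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;

    public class LanguageAwareAnalyzers {
        private static final Map<String, Analyzer> ANALYZERS = new HashMap<>();
        static {
            ANALYZERS.put("en", new EnglishAnalyzer());
            ANALYZERS.put("fr", new FrenchAnalyzer());
            ANALYZERS.put("de", new GermanAnalyzer());
        }

        /** Pick the analyzer for a detected language code, falling back to StandardAnalyzer. */
        public static Analyzer forLanguage(String languageCode) {
            return ANALYZERS.getOrDefault(languageCode, new StandardAnalyzer());
        }

        public static Analyzer forText(String text) {
            String lang = detectLanguage(text);   // placeholder: wire in Nutch/Tika here
            return forLanguage(lang);
        }

        // Hypothetical stub; replace with a real language identifier.
        private static String detectLanguage(String text) {
            return "en";
        }
    }

Per the answer above, the same forText call would be applied to the query string at search time so the query is analyzed the same way as the matching documents.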

Shashikant Kore
I like this suggestion as well; it's more advanced, but I might migrate to it. It offers an excellent balance.
Halirob
A: 

First, you should determine what language your documents are in. For example, my documents are in English, Japanese, or Persian; you can detect a document's language by examining its UTF-8 characters.

Then, once you know which language a document is in, you can analyze it with a language-specific analyzer.
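
As a very rough illustration of the character-based detection this answer hints at (my sketch, not the author's code), the snippet below guesses a document's script by counting characters per Unicode block. It is only a crude heuristic, and Persian is treated as part of the Arabic block; a real language identifier (Nutch, Tika, etc.) will be far more reliable.

    public class ScriptSniffer {

        /** Crude guess at a document's script by counting characters per Unicode block. */
        public static String guessScript(String text) {
            int cjk = 0, arabic = 0, latin = 0;
            for (int i = 0; i < text.length(); ) {
                int cp = text.codePointAt(i);
                Character.UnicodeBlock block = Character.UnicodeBlock.of(cp);
                if (block == Character.UnicodeBlock.CJK_UNIFIED_IDEOGRAPHS
                        || block == Character.UnicodeBlock.HIRAGANA
                        || block == Character.UnicodeBlock.KATAKANA) {
                    cjk++;
                } else if (block == Character.UnicodeBlock.ARABIC) {
                    arabic++;   // Persian text falls in the Arabic block
                } else if (Character.isLetter(cp)) {
                    latin++;    // everything else that is a letter
                }
                i += Character.charCount(cp);
            }
            if (cjk > arabic && cjk > latin) return "cjk";
            if (arabic > latin) return "arabic";
            return "latin";
        }
    }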

Mehdi Amrollahi
A: 

Purely anecdotal evidence, but we use a (customised, though not in any relevant way) version of StandardAnalyzer for our system. Not only may our documents be in different languages from each other, but a single document may contain chunks of several languages (for example, imagine an article written in Japanese with comments in English), so language-sniffing is difficult.

The majority of our documents are in English, but significant numbers are in Chinese and Japanese, with a smaller number in French, Spanish, Portuguese and Korean.

End result? We use StandardAnalyzer, and have very few complaints from people using the system in non-Roman languages about the way our searching works. Our system is somewhat 'enforced' on its users, by the way, so it's not that people are silently moving elsewhere instead of complaining; if they're unhappy, we generally know.

So based on the fact that I'm not swamped with user complaints (very occasional ones, mainly about Chinese, but nothing serious and they're easily explained) it seems to be 'good enough' for many cases.

Cowan
A: 

The correct answer depends on your main language (if any).

For the best cross-language IR performance, I'd go with a 4/5-gram analyzer; it has been shown to work well on many languages. It might even work better than SimpleAnalyzer for English, too. See http://www.eecs.qmul.ac.uk/~christof/html/publications/inrt142.pdf for example.
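
To make the idea concrete (my sketch, not the answerer's code), a character n-gram analyzer in Lucene can be built from NGramTokenizer plus a lower-case filter. This assumes a recent Lucene release where these constructors take no Version argument and where LowerCaseFilter lives in the base analysis package; both details vary across versions.

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.LowerCaseFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.ngram.NGramTokenizer;

    /** Language-agnostic analyzer that indexes lower-cased character 4- and 5-grams. */
    public class CharNGramAnalyzer extends Analyzer {
        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer source = new NGramTokenizer(4, 5);        // min gram, max gram
            TokenStream result = new LowerCaseFilter(source);   // normalize case
            return new TokenStreamComponents(source, result);
        }
    }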

I have looked into this from another angle as well. It seems there isn't a catch-all analyzer: each language needs its own approach for the best results.

synhershko