views: 103
answers: 1

Hi - I am building a search index that contains special names - names containing !, ?, &, + and so on. I have to treat the following searches differently:

me & you

me + you

But whatever I do (I tried QueryParser escaping before indexing, escaped manually, tried different indexers...), when I check the search index with Luke the & and + do not show up (question marks, @ symbols and the like do show up).

The logic behind this is that I am doing partial searches for live suggestions (and the fields are not that large), so I split the text up into "m", "me", "+", "y", "yo", "you" and index that. It is much faster that way than a wildcard query, and the index size is not a big problem.

So what I need is for these special characters to end up in the index as well.
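
To make that concrete, here is roughly what the expansion looks like (ExpandToPrefixes is just an illustrative name; my real denormalization method is shown further down). It needs using System and using System.Collections.Generic:

// Illustration only: expands "me + you" into "m me + y yo you",
// which is then handed to the analyzer and indexed.
private static string ExpandToPrefixes(string name)
{
    var parts = new List<string>();
    foreach (var word in name.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
    {
        for (var length = 1; length <= word.Length; length++)
        {
            parts.Add(word.Substring(0, length));
        }
    }
    return string.Join(" ", parts.ToArray());
}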

This is my code:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using Lucene.Net.Analysis;
using Lucene.Net.Util;

namespace AnalyzerSpike
{
    public class CustomAnalyzer : Analyzer
    {
        // Tokenize on whitespace only, then lowercase and ASCII-fold the tokens.
        public override TokenStream TokenStream(string fieldName, TextReader reader)
        {
            return new ASCIIFoldingFilter(new LowerCaseFilter(new CustomCharTokenizer(reader)));
        }
    }

    public class CustomCharTokenizer : CharTokenizer
    {
        public CustomCharTokenizer(TextReader input) : base(input)
        {

        }

        public CustomCharTokenizer(AttributeSource source, TextReader input) : base(source, input)
        {
        }

        public CustomCharTokenizer(AttributeFactory factory, TextReader input) : base(factory, input)
        {
        }

        protected override bool IsTokenChar(char c)
        {
            // Everything except a space is part of a token, so "+", "&", "?" etc. survive tokenization.
            return c != ' ';
        }
    }
}
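
To see what actually reaches the index without opening it in Luke, a sample string can be run through the analyzer and the tokens printed - a rough sketch against the Lucene.Net 2.9.x attribute API (the exact attribute type and method names vary slightly between releases):

using System;
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Tokenattributes;

namespace AnalyzerSpike
{
    public static class AnalyzerDump
    {
        // Prints every token the analyzer would hand to the IndexWriter.
        public static void Dump(Analyzer analyzer, string text)
        {
            TokenStream stream = analyzer.TokenStream("name", new StringReader(text));
            var termAttr = (TermAttribute)stream.AddAttribute(typeof(TermAttribute));
            while (stream.IncrementToken())
            {
                Console.WriteLine(termAttr.Term());
            }
            stream.Close();
        }
    }
}

With the CustomAnalyzer above, AnalyzerDump.Dump(new CustomAnalyzer(), "m me + y yo you") should print one line per token, including the bare "+".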

The code to create the index:

private void InitIndex(string path, Analyzer analyzer)
{
    // true = create a new index at the given path, overwriting any existing one
    var writer = new IndexWriter(path, analyzer, true);

    //some multiline textbox that contains one item per line:
    var all = new List<string>(txtAllAvailable.Text.Replace("\r","").Split('\n'));

    foreach (var item in all)
    {
        writer.AddDocument(GetDocument(item));
    }

    writer.Optimize();
    writer.Close();
}

private static Document GetDocument(string name)
{
    var doc = new Document();

    doc.Add(new Field(
        "name",
        DeNormalizeName(name),
        Field.Store.YES,
        Field.Index.ANALYZED));

    doc.Add(new Field(
                "raw_name",
                name,
                Field.Store.YES,
                Field.Index.NOT_ANALYZED));

    return doc;
}

(The code uses Lucene.Net version 1.9.x (EDIT: sorry - it was 2.9.x) but should be compatible with the Java version of Lucene.)
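
For the lookup side (not shown here), the idea is to query these literal prefix terms directly instead of going through the QueryParser, so no escaping of &, + or ? is needed - a rough sketch (the field name and terms match the code above; everything else is illustrative):

using Lucene.Net.Index;
using Lucene.Net.Search;

// The analyzer lowercases everything, so the query term must be lowercased too.
// A plain TermQuery bypasses the QueryParser, so "+" and "&" need no escaping.
Query plusSuggestion = new TermQuery(new Term("name", "+"));   // user has typed "+"
Query textSuggestion = new TermQuery(new Term("name", "yo"));  // user has typed "yo"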

Thx

A: 

Finally had the time to look into it again. It was a silly mistake in my denormalize method: it filtered out single-character parts (as it did in the beginning), and so it dropped the plus sign when it was surrounded by spaces :-/

Thx for your help though, Moleski!

// Requires: using System.Text.RegularExpressions;
private static string DeNormalizeName(string name)
{
    string answer = string.Empty;

    // Keep the original text plus a copy stripped of special characters,
    // so prefixes of both "me + you" and "me you" end up in the index.
    var wordsOnly = Regex.Replace(name, "[^\\w0-9 ]+", string.Empty);
    var filterText = (name != wordsOnly) ? name + " " + wordsOnly : name;

    foreach (var subName in filterText.Split(' '))
    {
        // ">= 1" keeps single-character parts such as "+" - filtering them out was the original bug.
        if (subName.Length >= 1)
        {
            // Emit every prefix of the sub-word: "you" -> "y yo you".
            for (var j = 1; j <= subName.Length; j++)
            {
                answer += subName.Substring(0, j) + " ";
            }
        }
    }
    return answer.TrimEnd();
}
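
For reference, a quick check of the fixed method (assuming DeNormalizeName above is in scope):

Console.WriteLine(DeNormalizeName("me + you"));
// prints: m me + y yo you m me y yo you
// The "+" now survives as a single-character token instead of being dropped.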
Eleasar
I'm learning how to do this myself - do you think you could post the code for your denormalize method? I'm not clear on what it does at that point. Thanks.
Matt
I updated my answer - but this code is not specific to Lucene; it is just for my special case (and it is just a spike, so don't expect perfect code). If you want plain search, forget about my denormalization method. It is only there to speed up live suggestions in my special case, because a wildcard search is quite expensive for short words and I want to show suggestions starting from the first or second letter.
Eleasar