val uninterestingthings = ".".r
def parser: Parser[Any] = "(?ui)(regexvalue)".r | (uninterestingthings ~> parser)

This recursive parser will try to parse "(?ui)(regexvalue)".r until the end of input. Is there a way in Scala to stop parsing once a defined number of characters has been consumed by "uninterestingthings"?

Update: I have one poor solution:

import scala.util.matching.Regex
import scala.util.parsing.combinator.{PackratParsers, RegexParsers}

object NonRecursiveParser extends RegexParsers with PackratParsers {
  var max = -1
  val maxInput2Consume = 25

  // Widen the "uninteresting" prefix by one character on each call,
  // giving up once the limit is reached.
  def uninteresting: Regex =
    if (max < maxInput2Consume) {
      max += 1
      ("." + "{0," + max.toString + "}").r
    } else {
      throw new Exception("I am tired")
    }

  lazy val value = "itt".r

  def parser: Parser[Any] = (uninteresting ~> value) | parser

  def parseQuery(input: String) =
    try {
      parse(parser, input)
    } catch {
      case e: Exception => // swallow the "I am tired" bail-out
    }
}

Disadvantages:
- not all members are lazy vals, so PackratParsers incurs a time penalty
- a regex is constructed on every call to "uninteresting" - a time penalty
- an exception is used for control flow - a code-style and time penalty (see the exception-free sketch after this list)
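For the last point, one exception-free variant (a minimal sketch, not from the original post; the object name is mine) is to replace the throw with the err combinator: err produces a non-backtracking Error result, and | does not try its alternative after an Error, so the recursion in parser terminates cleanly.

import scala.util.parsing.combinator.RegexParsers

object BoundedParser extends RegexParsers {
  var max = -1
  val maxInput2Consume = 25

  // Same widening trick as above, but hitting the limit yields a
  // non-backtracking Error via `err` instead of throwing.
  def uninteresting: Parser[String] =
    if (max < maxInput2Consume) {
      max += 1
      ("." + "{0," + max.toString + "}").r
    } else {
      err("consumed more than " + maxInput2Consume + " uninteresting characters")
    }

  lazy val value = "itt".r

  def parser: Parser[Any] = (uninteresting ~> value) | parser

  def parseQuery(input: String) = {
    max = -1 // reset the counter between runs
    parse(parser, input)
  }
}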

+3  A: 

The quick-and-dirty answer is to limit the number of characters in the regex for uninterestingthings and make the parser non-recursive:

val uninterestingthings = ".{0,60}".r  // at most 60 characters
val parser = (uninterestingthings ~> "(?ui)(regexvalue)".r)*

Based on the comment about greediness eating the regexvalue, I propose a single regex:

val parser = ("(?.{0,60}?)(?ui)(regexvalue)".r)*

But we seem to have ventured outside the realm of Scala parsers into regex minutiae. I'd be interested in seeing other results.
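A self-contained version of that single-regex approach (a minimal sketch; "itt" stands in for the question's placeholder regexvalue, and the object name is mine):

import scala.util.parsing.combinator.RegexParsers

object SingleRegexParser extends RegexParsers {
  // Reluctantly skip at most 60 characters, then match the value
  // case-insensitively; each match spans the skipped prefix plus the value.
  val interesting = "(?:.{0,60}?)(?ui)(itt)".r

  def parser: Parser[List[String]] = rep(interesting)

  // `parse` (rather than `parseAll`) leaves trailing uninteresting
  // input unconsumed instead of failing on it.
  def parseQuery(input: String) = parse(parser, input)
}

// Example: SingleRegexParser.parseQuery("xxxittyyyITTzzz") matches
// both "itt" and "ITT", each within 60 characters of the previous match.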

Mitch Blevins
It will not work because "uninterestingthings" is greedy and will always consume 60 characters of input
Jeriho
A: 

Use a tokenizer to break things up first, using all of the regexps for interesting things that you already know. Use a single ".".r to match uninteresting things if they're significant to your grammar. (Or throw them away if they're not significant to the grammar.) Your interesting things now have known types, and they get identified by the tokenizer using a different algorithm than the parsing. Since all of the lookahead problems are solved by the tokenizer, the parser should be easy.
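A minimal sketch of that two-phase idea (the Token types, the tokenize name, and the "itt" stand-in pattern are mine, not from the answer): scan the input once, emitting a typed token for each interesting match and a single-character token for everything else, so the parser never has to guess how much uninteresting input to skip.

import scala.util.matching.Regex

sealed trait Token
case class Interesting(text: String) extends Token
case class Uninteresting(ch: Char) extends Token

object Tokenizer {
  private val interesting: Regex = "(?ui)itt".r // stand-in for regexvalue

  // Scan left to right: at each position, try the known pattern;
  // on a match emit Interesting, otherwise emit one Uninteresting char.
  def tokenize(input: String): List[Token] = {
    val out = scala.collection.mutable.ListBuffer.empty[Token]
    var pos = 0
    while (pos < input.length) {
      interesting.findPrefixOf(input.substring(pos)) match {
        case Some(m) => out += Interesting(m); pos += m.length
        case None    => out += Uninteresting(input(pos)); pos += 1
      }
    }
    out.toList
  }
}

// Example: Tokenizer.tokenize("xxITTyy") yields two Uninteresting
// tokens, then Interesting("ITT"), then two more Uninteresting tokens.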

Ken Bloom