
A grammar, by definition, contains productions. Here is an example of a very simple grammar:

E -> E + E
E -> n

I want to implement a Grammar class in C#, but I'm not sure how to store productions; for example, how do I distinguish between terminal and non-terminal symbols? I was thinking about:

struct Production
{
   String Left;       // for example "E"
   String Right;      // for example "E + E"
}

Left will always be a non-terminal symbol (these are context-free grammars), but the right side of a production can contain both terminal and non-terminal symbols.

So now I'm thinking about two ways of implementing this:

  1. Non-terminal symbols will be written using brackets, for example:

    E+E will be represented as string "[E]+[E]"

  2. Create an additional data structure, NonTerminal:

    struct NonTerminal { String Symbol; }

and E+E will be represented as array/list:

[new NonTerminal("E"), "+", new NonTerminal("E")]
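A common way to flesh out that second option is a small class hierarchy with a shared base type, so a right-hand side can hold a mix of both kinds. A minimal sketch (every name below except `NonTerminal` and `Production` is my own assumption, not from the question):

```csharp
using System;
using System.Collections.Generic;

// Shared base type so a production's right side can mix both symbol kinds.
public abstract class Symbol
{
    public string Text { get; }
    protected Symbol(string text) { Text = text; }
    public override string ToString() => Text;
}

public sealed class Terminal : Symbol
{
    public Terminal(string text) : base(text) { }
}

public sealed class NonTerminal : Symbol
{
    public NonTerminal(string symbol) : base(symbol) { }
}

public sealed class Production
{
    public NonTerminal Left;    // always a non-terminal (context-free grammar)
    public List<Symbol> Right;  // mixed terminals and non-terminals
}
```

With this, E -> E + E becomes `new Production { Left = new NonTerminal("E"), Right = new List<Symbol> { new NonTerminal("E"), new Terminal("+"), new NonTerminal("E") } }`.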

but I think there are better ideas; it would be helpful to hear some responses.

+2  A: 

Here is my idea for storing productions:

Dictionary<NonTerminalSymbol, List<Symbol>>

where

Symbol is the parent (possibly abstract) class for the NonTerminalSymbol, TerminalSymbol, and Production classes

so in your example that dictionary would have one key ( "E" ) and two values in the corresponding list ( "[E]+[E]" and "n" ).

zgorawski
+1 for the object hierarchy, exactly my idea. However, this `Dictionary` encoding is only good for production, not parsing.
larsmans
Um, what if you have two rules with the same nonterminal? (See the OP's example rules, for instance.) Dictionaries, AFAIK, only allow one stored item per key.
Ira Baxter
Yep, Ira is right; consider E->E+E, E->n: there is only one entry for the left side E in your structure.
dfens
A: 

Maybe it would be helpful to use extension methods for the second approach:

static class StringEx
{
   public static NonTerminal NonTerminal(this string obj)
   {
       return new NonTerminal(obj);
   }
}

so it would look like this

["E".NonTerminal(), "+", "E".NonTerminal()]

The advantage of this method is that it keeps the code easy to modify.
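Putting it together, a self-contained version of the idea (the minimal `NonTerminal` type here is an assumption based on the question; the extension class must be top-level and static for this to compile):

```csharp
using System.Collections.Generic;

public class NonTerminal
{
    public string Symbol { get; }
    public NonTerminal(string symbol) { Symbol = symbol; }
}

public static class StringEx
{
    // Lets any string be lifted into a NonTerminal: "E".NonTerminal()
    public static NonTerminal NonTerminal(this string obj)
    {
        return new NonTerminal(obj);
    }
}
```

A right-hand side can then be written as `new List<object> { "E".NonTerminal(), "+", "E".NonTerminal() }`.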

Ai_boy
+2  A: 

I'd use

 Dictionary<NonTerminalSymbol, HashSet<List<Symbol>>> 

enabling lookup, by nonterminal, of the set of production-rule right-hand sides (themselves represented as lists of terminal/nonterminal Symbols) associated with that nonterminal. (The OP's question shows that the nonterminal E might be associated with two rules, but we only need the right-hand sides once we have the left-hand side.)
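Populated with the OP's two rules, the structure looks like the following sketch (using plain strings for symbols to keep it short; a real version would use the Symbol class hierarchy instead):

```csharp
using System.Collections.Generic;

var rules = new Dictionary<string, HashSet<List<string>>>();

// E -> E + E  and  E -> n  share one key, with one entry per right-hand side.
// Note: HashSet compares List<T> by reference, so identical right-hand sides
// added twice would not be deduplicated without a custom IEqualityComparer.
rules["E"] = new HashSet<List<string>>
{
    new List<string> { "E", "+", "E" },
    new List<string> { "n" },
};
```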

This representation works only for vanilla BNF grammar definitions, in which there is no syntactic sugar for common grammar-defining idioms. Such idioms typically include choice, Kleene star/plus, etc., and when they are available in defining the grammar, you get so-called Extended BNF, or EBNF. If we write EBNF allowing only choice, denoted by |, the expression grammar in flat form hinted at by the OP as an example is:

         E = S ;
         S = P | S + P | S - P ; 
         P = T | P * T | P / T ;
         T = T ** M | ( E ) | Number | ID ;

and my first suggestion can represent this, because alternation is only used to show different rule right-hand sides. However, it won't represent this:

         E = S ;
         S = P A* ;
         A = + P | - P ;
         P = T M+ ; -- to be different
         M = * T | / T ;
         T = T ** M | ( E ) | Number | ID | ID ( E  ( # | C) * ) ; -- function call with skipped parameters
         C = , E ;

The key problem this additional notation introduces is the ability to compose the EBNF operators repeatedly on sub-syntax definitions, and that's the whole point of EBNF.

To represent EBNF, you have to store productions essentially as trees that represent the, well, expression structure of the EBNF (in fact, this is essentially the same problem as representing any expression grammar).

To represent the EBNF (expression) tree, you need to define the tree structure of the EBNF. You need tree nodes for:

  • symbols (terminal or not)
  • Alternation (having a list of alternatives)
  • Kleene *
  • Kleene +
  • "Optional" ?
  • others that you decide your EBNF has as operators (e.g., comma'd lists: a way to say that one has a list of grammar elements separated by a chosen "comma" character, or ended by a chosen "semicolon" character, ...)

The easiest way to do that is to first write an EBNF grammar for the EBNF itself:

EBNF = RULE+ ;
RULE = LHS "=" TERM* ";" ;
TERM = STRING | SYMBOL | TERM "*" 
       | TERM "+" | ';' STRING TERM | "," TERM STRING 
       | "(" TERM* ")" ;

Note that I've added comma'd and semicolon'ed list to the EBNF (extended, remember?)

Now we can simply inspect the EBNF to decide what is needed. What you now need is a set of records (OK, classes for C#'er) to represent each of these rules. So:

  • a class for EBNF that contains a set of rules
  • a class for a RULE, having an LHS symbol and a list of TERMs
  • an abstract base class for TERM with several concrete variants, one for each alternative of TERM (a so-called "discriminated union" typically implemented by inheritance and instance_of checks in an OO language).

Note that some of the concrete variants can refer to other class types in the representation, which is how you get a tree. For instance:

   KleeneStar inherits_from TERM {
        T: TERM;
   }

Details left to the reader for encoding the rest.
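In C#, that discriminated union might look like the following sketch (all class names here are my own assumptions, following the node list above):

```csharp
using System.Collections.Generic;

public abstract class Term { }

// A terminal or nonterminal reference: the leaves of the tree.
public sealed class SymbolTerm : Term
{
    public string Name;
    public bool IsTerminal;
}

// Interior nodes compose Terms, which is what makes EBNF composable.
public sealed class Sequence : Term { public List<Term> Items = new List<Term>(); }
public sealed class Alternation : Term { public List<Term> Alternatives = new List<Term>(); }
public sealed class KleeneStar : Term { public Term Body; }
public sealed class KleenePlus : Term { public Term Body; }
public sealed class Optional : Term { public Term Body; }

public sealed class Rule
{
    public string Lhs;
    public Term Rhs;
}

public sealed class Ebnf
{
    public List<Rule> Rules = new List<Rule>();
}
```

For instance, `S = P A* ;` becomes a `Rule` whose `Rhs` is a `Sequence` holding a `SymbolTerm` for P and a `KleeneStar` whose `Body` is a `SymbolTerm` for A.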

This raises an unstated problem for the OP: how do you use this grammar representation to drive parsing of strings?

The simple answer is to get a parser generator, which means you need to figure out what EBNF it uses. (In this case, it might simply be easier to store your EBNF as text and hand it to that parser generator, which rather makes this whole discussion moot.)

If you can't get one, or want to build one of your own, well, now you have the representation you need in order to build it. One other alternative is to build a recursive-descent parser driven by this representation to do your parsing. The approach is too large to contain in the margin of this answer, but is straightforward for those with experience with recursion.

EDIT 10/22: The OP clarifies that he insists on parsing all context-free grammars and "especially NL". For all context-free grammars, he will need a very strong parsing engine (Earley, GLR, full backtracking, ...). For natural language, he will need parsers much stronger than those; people have been trying to build such parsers for decades, with only some, but definitely not easy, success. Either of these two requirements seems to make the discussion of representing the grammar rather pointless: if he does represent a straight context-free grammar, it won't parse natural language (proven by those guys trying for decades), and if he wants a more powerful NL parser, he'll need to simply use what the bleeding-edge types have produced. Count me a pessimist on his probable success, unless he decides to become a real expert in the area of NL parsing.

Ira Baxter
A: 

Not an answer, just a comment, but I can't comment right now:
here is an interesting read from Eric Lippert about grammars,
starting with
http://blogs.msdn.com/b/ericlippert/archive/2010/04/26/every-program-there-is-part-one.aspx
He starts to implement things in part eight:
http://blogs.msdn.com/b/ericlippert/archive/2010/05/20/every-program-there-is-part-eight.aspx
He seems to have a rather simple approach compared to yours.

hacktick