What is the actual difference between LR, SLR, and LALR parsers? I know that SLR and LALR are types of LR parsers, but what is the actual difference as far as their parsing tables are concerned?

And how can we show whether a grammar is LR, SLR, or LALR? For an LL grammar we just have to show that no cell of the parsing table contains multiple production rules. Are there similar rules for LALR, SLR, and LR?

For example, how can we show that the grammar

S --> Aa | bAc | dc | bda
A --> d

is LALR(1) but not SLR(1)?


EDIT (ybungalobill): I didn't get a satisfactory answer on the difference between LALR and LR. So LALR's tables are smaller in size, but it can recognize only a subset of LR grammars. Can someone elaborate more on the difference between LALR and LR, please? LALR(1) and LR(1) will be sufficient for an answer. Both of them use 1 token of look-ahead and both are table driven! How are they different?

+4  A: 
  • LR is the most general type of parser you list. It parses input from Left to right, and produces the Rightmost derivation (in reverse). With enough tokens of lookahead, an LR parser can theoretically recognise almost any grammar, but LR parsers usually have practical problems with table size for large grammars, because they must use Deterministic Finite Automata.
  • SLR is the same as an LR parser with no lookahead. There is no such thing as an SLR(1) parser, as you mention, because the (1) at the end implies one token of lookahead.
  • LALR parsers are sort of a hybrid between the two. They're LR parsers, but the finite state machine can look ahead one token in order to see which branch to take. This allows the parser to be based on Nondeterministic Finite Automata, which can fit in a much smaller space. They can't recognise everything LR parsers can, but they can get the vast majority, and they're usually much smaller in terms of table size.

how can we show that the grammar ~ is LALR(1) but not SLR?
These definitions conflict:

S --> Aa | dc
A --> d

A d can be completed as S -> dc, or reduced via A -> d (toward S -> Aa). In order to tell which, the parser needs one token of lookahead (to check whether the lookahead token is a c). Therefore the parser cannot be SLR.
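
For concreteness, here is the worked check on the original grammar (done by hand, using the usual dotted-item notation). FOLLOW(A) = {a, c}, since A is followed by a in S --> Aa and by c in S --> bAc. The state reached after shifting an initial d contains the items

S --> d.c
A --> d.

An SLR(1) parser reduces A --> d on every token in FOLLOW(A), so on lookahead c it can both shift (toward dc) and reduce: a shift/reduce conflict. An LALR(1) generator records that in this particular state A --> d can only be followed by a (the item arose from S --> .Aa), so its lookahead set is just {a} and the conflict vanishes. The state reached after b d conflicts symmetrically under SLR (reduce A --> d on a vs. shift a for S --> bd.a) and is likewise resolved by LALR's state-specific lookahead {c}.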

Billy ONeal
What is this FSA attached to tables that LR doesn't have? I've implemented SLR/LALR/LR/GLR parsers, and I don't have one of these. Can you explain this better?
Ira Baxter
@Ira: Agreed -- that didn't make any sense. Is my edit better?
Billy ONeal
@Billy ... er, no. Both the LALR and the LR parsers "look ahead" one token (by checking the input token against lookahead sets). [Technically SLR(1) isn't meaningless, because it can do this too; it's just pointless since there can be only one reduction.] You've added NDFA, but I don't get it. I see the difference as LALR gen merging states that have nonconflicting goto vectors and lookahead sets, or alternatively, LR gen producing a new state whenever a proposed new state isn't identical to one already generated. Since LR gen doesn't merge states, it produces (much) bigger state machines.
Ira Baxter
@Ira Baxter: Hmm.. well I've not actually tried to implement any of these things -- I use recursive descent whenever I need to write anything like this. So +1 to your answer, and I would defer to you on this given that you've actually implemented it before. What I know about it is only from cursory reading of the Dragon Book and from a one semester class on the subject.
Billy ONeal
+8  A: 

SLR, LALR and LR parsers can all be implemented using exactly the same table-driven machinery.

Fundamentally, the parsing algorithm collects the next input token T, and consults the current state S (and associated lookahead, GOTO, and reduction tables) to decide what to do; a sketch of this loop follows the list below:

  • SHIFT: If the current table says to SHIFT on the token T, the pair (S,T) is pushed onto the parse stack, the state is changed according to what the GOTO table says for the current token (e.g., GOTO(T)), another input token T' is fetched, and the process repeats.
  • REDUCE: Every state has 0, 1, or many reductions that might occur in it. If the parser is LR or LALR, the token is checked against the lookahead sets for all valid reductions for the state. If the token matches the lookahead set for a reduction by grammar rule G = R1 R2 .. Rn, a stack reduction and shift occurs: the semantic action for G is called, the stack is popped n times (once for each of R1..Rn), the pair (S,G) is pushed onto the stack, the new state S' is set to GOTO(G), and the cycle repeats with the same token T. If the parser is an SLR parser, there is at most one reduction rule for the state, so the reduction action can be done blindly without searching for which reduction applies. It is useful for an SLR parser to know whether there is a reduction or not; this is easy to tell if each state explicitly records the number of reductions associated with it, and that count is needed for the L(AL)R versions in practice anyway.
  • ERROR: If neither SHIFT nor REDUCE is possible, a syntax error is declared.
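
Here is a minimal sketch of that common driver loop in Python. The table shapes (ACTION, GOTO, rules) are hypothetical placeholders, not any particular generator's API, and the answer's (state, symbol) stack pairs are trimmed to just the state half, which is all that drives control (semantic actions omitted):

    # ACTION[state][token] -> ('shift', next_state) | ('reduce', rule_no) | ('accept',)
    # GOTO[state][nonterminal] -> next_state
    # rules[rule_no] = (lhs_nonterminal, rhs_length)
    def lr_parse(tokens, ACTION, GOTO, rules):
        stack = [0]                          # state stack; 0 = start state
        i = 0
        while True:
            act = ACTION[stack[-1]].get(tokens[i])
            if act is None:                  # ERROR: neither SHIFT nor REDUCE applies
                raise SyntaxError("unexpected %r in state %d" % (tokens[i], stack[-1]))
            if act[0] == 'shift':            # SHIFT: push the new state, consume the token
                stack.append(act[1])
                i += 1
            elif act[0] == 'reduce':         # REDUCE: pop |rhs| states, then GOTO on the lhs
                lhs, rhs_len = rules[act[1]]
                del stack[len(stack) - rhs_len:]
                stack.append(GOTO[stack[-1]][lhs])
            else:                            # ACCEPT
                return

The same loop runs unchanged whether the tables came from an SLR, LALR(1), or LR(1) generator, which is the point.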

So, if they all use the same machinery, what's the point?

The purported value of SLR is its simplicity of implementation; you don't have to scan through the possible reductions checking lookahead sets, because there is at most one, and it is the only viable action if there are no SHIFT exits from the state. Which reduction applies can be attached specifically to the state, so the SLR parsing machinery doesn't have to hunt for it. In practice L(AL)R parsers handle a usefully larger set of languages, and are so little extra work to implement that nobody implements SLR except as an academic exercise.

The difference between LALR and LR has to do with the table generator. LR parser generators keep track of all possible reductions from specific states and their precise lookahead sets; you end up with states in which every reduction is associated with its exact lookahead set from its left context. This tends to build rather large sets of states. LALR parser generators are willing to combine states if the GOTO tables and lookahead sets for reductions are compatible and don't conflict; this produces considerably smaller numbers of states, at the price of not being able to distinguish certain symbol sequences that LR can distinguish. So, LR parsers can parse a larger set of languages than LALR parsers, but have very much bigger parser tables. In practice, one can find LALR grammars which are close enough to the target languages that the size of the state machine is worth optimizing; the places where the LR parser would be better are handled by ad hoc checking outside the parser.
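
To illustrate the merging step, here is a toy sketch (illustrative data shapes, not a real generator's internals): each LR(1) state is modeled as a set of (lr0_item, lookahead) pairs, and states whose LR(0) cores match are merged by unioning their lookaheads:

    def merge_to_lalr(lr1_states):
        merged = {}
        for state in lr1_states:
            core = frozenset(item for item, _la in state)  # strip the lookaheads
            merged.setdefault(core, set()).update(state)   # union the lookahead info
        return list(merged.values())

The loss of precision comes from that union: lookahead sets that were disjoint in separate LR(1) states can overlap once merged, which is exactly where LALR-only conflicts appear.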

So: all three use the same machinery. SLR is "easy" in the sense that you can ignore a tiny bit of the machinery, but it is just not worth the trouble. LR parses a broader set of languages, but the state tables tend to be pretty big. That leaves LALR as the practical choice.

Having said all this, it is worth knowing that GLR parsers can parse any context-free language, using more complicated machinery but exactly the same tables (including the smaller version used by LALR). This means that GLR is strictly more powerful than LR, LALR and SLR; pretty much if you can write a standard BNF grammar, GLR will parse according to it. The difference in the machinery is that GLR is willing to try multiple parses when there are conflicts between the GOTO table and/or lookahead sets. (How GLR does this efficiently is sheer genius [not mine], but won't fit in this SO post.)
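
As a brute-force illustration of "try multiple parses" (emphatically not the efficient graph-structured-stack technique the answer alludes to), one can fork a copy of the whole stack at every conflict. The table shapes match the driver sketch above, except that ACTION here yields a set of possible actions; this naive version also assumes no cyclic/empty-rule reduction loops, or the search may not terminate:

    def glr_parse(tokens, ACTION, GOTO, rules):
        pending = [([0], 0)]                 # (state stack, input position)
        while pending:
            stack, i = pending.pop()
            for act in ACTION[stack[-1]].get(tokens[i], ()):
                if act[0] == 'accept':
                    return True              # some branch parsed the input
                elif act[0] == 'shift':
                    pending.append((stack + [act[1]], i + 1))
                else:                        # ('reduce', rule_no): fork a reduced copy
                    lhs, rhs_len = rules[act[1]]
                    s = stack[:len(stack) - rhs_len]
                    s.append(GOTO[s[-1]][lhs])
                    pending.append((s, i))
        return False                         # every branch hit a dead end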

That for me is an enormously useful fact. I build program analyzers and code transformers, and parsers are necessary but "uninteresting"; the interesting work is what you do with the parsed result, so the focus is on the post-parsing work. Using GLR means I can build working grammars relatively easily, compared to hacking a grammar into LALR-usable form. This matters a lot when trying to deal with non-academic languages such as C++ or Fortran, where you literally need thousands of rules to handle the entire language well, and you don't want to spend your life trying to hack the grammar rules to meet the limitations of LALR (or even LR).

As a sort of famous example, C++ is considered to be extremely hard to parse... by guys doing LALR parsing. C++ is straightforward to parse using GLR machinery, using pretty much the rules provided in the back of the C++ reference manual. (I have precisely such a parser, and it handles not only vanilla C++ but also a variety of vendor dialects. This is only possible in practice because we are using a GLR parser, IMHO.)

Ira Baxter
AFAIK C++ can't be parsed with LR because it needs infinite look ahead. So I can't think of any hacks that will make it possible to parse it with LR. Also LRE parsers sound promising.
ybungalobill
GCC used to parse C++ using Bison == LALR. You can always augment your parser with extra goo to handle the cases (lookahead, is-this-a-typename) that give you heartache. The question is "How painful a hack?" For GCC it was pretty painful, but they made it work. That doesn't mean this is recommended, which is my point about using GLR.
Ira Baxter
+2  A: 

LALR parsers merge similar states within an LR grammar to produce parser state tables that are exactly the same size as those of the equivalent SLR grammar, which are usually an order of magnitude smaller than pure LR parsing tables. However, for LR grammars that are too complex to be LALR, these merged states result in parser conflicts, or produce a parser that does not fully recognize the original LR grammar.

BTW, I mention a few things about this in my MLR(k) parsing table algorithm here.

Addendum

The short answer is that the LALR parsing tables are smaller, but the parser machinery is the same. A given LALR grammar will produce much larger parsing tables if all of the LR states are generated, with a lot of redundant (near-identical) states.

The LALR tables are smaller because the similar (redundant) states are merged together, effectively throwing away the context/lookahead info that the separate states encode. The advantage is that you get much smaller parsing tables for the same grammar.

The drawback is that not all LR grammars can be encoded as LALR tables, because more complex grammars have more complicated lookaheads, resulting in two or more states instead of a single merged state.

The main difference is that the algorithm to produce LR tables carries more info around between the transitions from state to state, while the LALR algorithm does not. So the LALR algorithm cannot tell if a given merged state should really be left as two or more separate states.
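
A standard textbook example makes this concrete; the following grammar is LR(1) but not LALR(1):

S --> aAd | bBd | aBe | bAe
A --> c
B --> c

The LR(1) machine keeps two separate states after a c and after b c: one holds A --> c. with lookahead d and B --> c. with lookahead e, the other the reverse. Their LR(0) cores are identical, so the LALR algorithm merges them, and the unioned lookahead sets {d, e} on both reductions create a reduce-reduce conflict that the unmerged LR(1) states never had.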

Loadmaster
+1 I like the Honalee idea. My G/L(AL)R parser generator had the seeds of something like this in it; it produces the minimal LALR machine, and then I was going to split states where there were conflicts, but I never carried through. This looks like a nice way to produce a minimal-size "LR"-like set of parse tables. While it won't help GLR in terms of what it can parse, it may cut the number of parallel parses that GLR has to carry, and that would be useful.
Ira Baxter