Hi,

I'm writing a small interpreter for a simple BASIC-like language as an exercise, targeting an AVR microcontroller in C with the avr-gcc toolchain. However, I'm wondering whether there are any open source tools out there that could help me write the lexer and parser.

If I were writing this to run on my Linux box, I could use flex/bison. Now that I've restricted myself to an 8-bit platform, do I have to do it all by hand, or is there another option?

Regards, Johan

+4  A: 

You can use flex/bison on Linux with its native gcc to generate the code that you will then cross-compile with your AVR gcc for the embedded target.

Paul R
+1  A: 

GCC can cross-compile to a variety of platforms, but flex and bison run on the host machine where you run the compiler. They just emit C code that the compiler then builds. Test the result to see how big the executable really is. Note that they have run-time libraries (libfl.a etc.) that you will also have to cross-compile for your target.

ConcernedOfTunbridgeWells
I still have to investigate the size of those libraries and that is why I asked the question in the first place. I want something specifically targeted towards small MCUs.
Johan
+5  A: 

If you are tight on space, you should hand-code a recursive descent parser; these are essentially LL(1) parsers. This is especially effective for languages as "simple" as BASIC. (I did several of these back in the 70s!) The good news is that these don't pull in any library code; you only get what you write.

They are pretty easy to code if you already have a grammar. First, you have to get rid of left-recursive rules (e.g., X = X Y). This is generally pretty easy to do, so I'll leave it as an exercise.

Then if you have BNF rule of the form:

 X = A B C ;

create a subroutine for each item in the rule (X, A, B, C) that returns a boolean saying "I saw the corresponding syntax construct". For X, code:

subroutine X()
     if ~(A()) return false;
     if ~(B()) { error(); return false; }
     if ~(C()) { error(); return false; }
     // insert semantic action here: generate code, do the work, ....
     return true;
end X;

Similarly for A, B, C.
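
In C, that skeleton comes out roughly as the sketch below; the parse_A/parse_B/parse_C stubs are placeholders for whatever sub-rules your grammar really has:

#include <stdbool.h>
#include <stdio.h>

static void error(void) { puts("syntax error"); }

/* Placeholder sub-rules; in a real parser each consumes input on success. */
static bool parse_A(void) { return true; }
static bool parse_B(void) { return true; }
static bool parse_C(void) { return true; }

/* X = A B C ; */
static bool parse_X(void)
{
    if (!parse_A()) return false;               /* no A: this isn't an X  */
    if (!parse_B()) { error(); return false; }  /* past A, B is mandatory */
    if (!parse_C()) { error(); return false; }
    /* insert semantic action here: generate code, do the work, ... */
    return true;
}

int main(void) { return parse_X() ? 0 : 1; }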

If a token is a terminal, write code that checks the input stream for the string of characters that makes up the terminal. E.g., for a Number, check that the input stream contains digits and advance the input-stream cursor past them. This is especially easy if you are parsing out of a buffer (for BASIC, you tend to get one line at a time): you simply advance, or don't advance, a buffer scan pointer. This code is essentially the lexer part of the parser.
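
A minimal sketch of that in C, assuming the current line lives in a buffer with a global scan pointer (the names here are illustrative):

#include <ctype.h>
#include <stdbool.h>

static const char *cursor;   /* scan pointer into the current line */

/* Number = Digit { Digit } ; advances the cursor only on a match. */
static bool match_number(void)
{
    const char *p = cursor;
    if (!isdigit((unsigned char)*p)) return false;  /* not a Number       */
    while (isdigit((unsigned char)*p)) p++;         /* scan past digits   */
    cursor = p;                                     /* commit the advance */
    return true;
}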

If your BNF rule is recursive... don't worry. Just code the recursive call. This handles grammar rules like:

T  =  '('  T  ')' ;

This can be coded as:

subroutine T()
     if ~(left_paren()) return false;
     if ~(T()) { error(); return false; }
     if ~(right_paren()) { error(); return false; }
     // insert semantic action here: generate code, do the work, ....
     return true;
end T;

If you have a BNF rule with an alternative:

 P = Q | R ;

then code P with alternative choices (this works because, by convention, a subroutine returns false without consuming any input when its first construct is missing):

subroutine P()
     if ~(Q()) {
         if ~(R()) return false;
     }
     return true;
end P;

Sometimes you'll encounter list forming rules. These tend to be left recursive, and this case is easily handled. Example:

L  =  A |  L A ;

You can code this as:

subroutine L()
     if ~(A()) return false;
     while (A()) ;   // keep consuming A's
     return true;
end L;
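
Putting the sequence, alternative, recursion, and list patterns together, here is a small self-contained C sketch for a toy expression grammar (the grammar is just an example to show the shape of the code, not something from your BASIC):

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>

static const char *cursor;

static void error(void) { printf("syntax error near '%s'\n", cursor); }

static bool match_char(char c)
{
    if (*cursor != c) return false;
    cursor++;
    return true;
}

static bool match_number(void)
{
    if (!isdigit((unsigned char)*cursor)) return false;
    while (isdigit((unsigned char)*cursor)) cursor++;
    return true;
}

static bool parse_expr(void);   /* forward declaration for the recursion */

/* factor = Number | '(' expr ')' ;   -- the alternative pattern */
static bool parse_factor(void)
{
    if (match_number()) return true;
    if (!match_char('(')) return false;
    if (!parse_expr())    { error(); return false; }
    if (!match_char(')')) { error(); return false; }
    return true;
}

/* term = factor { '*' factor } ;   -- the list pattern as a loop */
static bool parse_term(void)
{
    if (!parse_factor()) return false;
    while (match_char('*')) {
        if (!parse_factor()) { error(); return false; }
    }
    return true;
}

/* expr = term { '+' term } ; */
static bool parse_expr(void)
{
    if (!parse_term()) return false;
    while (match_char('+')) {
        if (!parse_term()) { error(); return false; }
    }
    return true;
}

int main(void)
{
    cursor = "1+(2*3)+4";
    puts(parse_expr() && *cursor == '\0' ? "ok" : "no parse");
    return 0;
}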

You can code several hundred grammar rules in a day or two this way. There are more details to fill in, but the basics here should be more than enough.

If you are really tight on space, you can build a virtual machine that implements these ideas. That's what I did back in the 70s, when 8K of 16-bit words was all you could get.

Ira Baxter
Yeah, it isn't too hard to hand roll a recursive descent parser for a simple language. Remember to optimize tail calls when you can -- stack space matters a lot when you've only got a couple kilobytes of RAM.
Steve S
@Ira -- you might want to add something about implementing a lexer/scanner from scratch also (since the OP asked about flex). It isn't quite as easy/elegant to do by hand as a parser, but I think it deserves mention.
Steve S
@Steve: noted that the terminal scanning === lexer part.
Ira Baxter
All: yes, you can do tail call optimization. This won't matter unless you expect nesting in your parsed code to get really deep; for a BASIC code line it's pretty hard to find expressions much more than 10 parentheses deep, and you can always put in a depth-limit counter to boot. It is true that embedded systems tend to have less stack space, so at least pay attention to your choice here.
Ira Baxter
Ah, I skimmed too quickly and missed that. I might have to write an answer of my own that describes another way to implement it. Nice answer, though!
Steve S
+4  A: 

I've implemented a parser for a simple command language targeted at the ATmega328p. This chip has 32K of flash and only 2K of RAM. The RAM is definitely the more important limitation -- if you aren't tied to a particular chip yet, pick one with as much RAM as possible. This will make your life much easier.

At first I considered using flex/bison. I decided against this option for two major reasons:

  • By default, Flex & Bison depend on some standard library functions (especially for I/O) that aren't available or don't work the same in avr-libc. I'm pretty sure there are supported workarounds, but this is some extra effort that you will need to take into account.
  • AVR has a Harvard architecture. C isn't designed to account for this, so even constant data gets loaded into RAM by default. You have to use special macros/functions to store and access data in flash and EEPROM (see the sketch after this list). Flex & Bison create some relatively large lookup tables, and these will eat up your RAM pretty quickly. Unless I'm mistaken (which is quite possible), you would have to edit the generated source in order to take advantage of the special flash & EEPROM interfaces.
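
To illustrate the second point: with avr-libc you mark data with PROGMEM and read it back through the pgm_read_* accessors, roughly like this (the keyword table is just an example):

#include <avr/pgmspace.h>
#include <stdint.h>

/* Keyword strings kept in flash instead of RAM. */
static const char kw_print[] PROGMEM = "PRINT";
static const char kw_goto[]  PROGMEM = "GOTO";
static const char * const keywords[] PROGMEM = { kw_print, kw_goto };

/* A plain dereference would read RAM; flash needs pgm_read_word/byte. */
char first_letter(uint8_t i)
{
    const char *p = (const char *)pgm_read_word(&keywords[i]);
    return (char)pgm_read_byte(p);
}

avr-libc also provides _P variants of the usual string functions (strcmp_P and friends) for comparing a RAM string against a flash-resident one.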

After rejecting Flex & Bison, I went looking for other generator tools and considered several of the alternatives out there.

You might also want to take a look at Wikipedia's comparison.

Ultimately, I ended up hand coding both the lexer and parser.

For parsing I used a recursive descent parser. I think Ira Baxter has already done an adequate job of covering this topic, and there are plenty of tutorials online.

For my lexer, I wrote regular expressions for all of my terminals, diagrammed the equivalent state machine, and implemented it as one giant function that uses gotos to jump between states. This was tedious, but the results worked great. As an aside, goto is a great tool for implementing state machines -- all of your states can have clear labels right next to the relevant code, there is no function-call or state-variable overhead, and it's about as fast as you can get. C really doesn't have a better construct for building static state machines.
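
As a sketch of the goto style (an illustrative fragment, not my actual lexer), here is a tiny recognizer that splits input into numbers and identifiers:

#include <ctype.h>

enum token { TOK_NUMBER, TOK_IDENT, TOK_ERROR, TOK_EOF };

/* Scans one token from *pp, advancing the pointer; each state is a label. */
enum token next_token(const char **pp)
{
    const char *p = *pp;

start:                                   /* skip blanks, dispatch on first char */
    if (*p == ' ')  { p++; goto start; }
    if (*p == '\0') { *pp = p; return TOK_EOF; }
    if (isdigit((unsigned char)*p)) { p++; goto in_number; }
    if (isalpha((unsigned char)*p)) { p++; goto in_ident; }
    *pp = p; return TOK_ERROR;

in_number:                               /* regex: [0-9]+ */
    if (isdigit((unsigned char)*p)) { p++; goto in_number; }
    *pp = p; return TOK_NUMBER;

in_ident:                                /* regex: [A-Za-z][A-Za-z0-9]* */
    if (isalnum((unsigned char)*p)) { p++; goto in_ident; }
    *pp = p; return TOK_IDENT;
}

Call it in a loop with a pointer into your line buffer; keyword recognition can be layered on top of TOK_IDENT afterwards.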

Something to think about: lexers are really just a specialization of parsers. The biggest difference is that regular grammars are usually sufficient for lexical analysis, whereas most programming languages have (mostly) context-free grammars. So there's really nothing stopping you from implementing a lexer as a recursive descent parser or using a parser generator to write a lexer. It's just not usually as convenient as using a more specialized tool.

Steve S