I've written a library to match strings against a set of patterns, and with it I can now easily embed lexical scanners into C programs.

I know there are many well-established tools for creating lexical scanners (lex and re2c, to name just the first two that come to mind), but this question is not about lexers; it's about the best way to "extend" C syntax. The lexer example is just a concrete case of a general problem.

I can see two possible solutions:

  1. write a preprocessor that converts a source file with the embedded lexer to a plain C file and, possibly, to a set of other files to be used in the compilation.
  2. write a set of C macros to represent lexers in a more readable form.

I've already done both, but the question is: "Which one would you consider better practice, according to the following criteria?"

  • Readability. The lexer logic should be clear and easy to understand.
  • Maintainability. Finding and fixing a bug should not be a nightmare!
  • Interference with the build process. The preprocessor requires an additional build step, the preprocessor executable has to be in the path, etc.

In other words, if you had to maintain or write a piece of software that uses one of the two approaches, which one would disappoint you less?

As an example, here is a lexer for the following problem:

  • Sum all numbers (they can be in decimal form, including exponential forms like 1.3E-4.2)
  • Skip strings (double- and single-quoted)
  • Skip lists (similar to LISP lists: (3 4 (0 1)() 3))
  • Stop on encountering the word end (case is irrelevant) or at the end of the buffer

Here is the lexer in the two styles:

/**** SCANNER STYLE 1 (preprocessor) ****/
#include "pmx.h"

char *t = buffer;   /* cursor into the text being scanned */
double sum = 0;

while (*t) {
  switch pmx(t) { /* the preprocessor will handle this */
    case "&q" :         /* skip strings */
      break; 

    case "&f<?=eE>&F" : /* sum numbers */ 
      sum += atof(pmx(Start,0));
      break;

    case "&b()":        /* skip lists */
      break;

    case "&iend" :      /* stop processing */ 
      t = "";
      break;

    case "<.>":         /* skip a char and proceed */
      break;
  }
}
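
(To make the third criterion concrete: after the extra build step, the switch above has become ordinary C. The real output is more involved; this simplified sketch, where pmx_match() and pmx_match_end() are invented names, only conveys its rough shape.)

/* Simplified sketch of possible preprocessor output -- illustration
   only; pmx_match() and pmx_match_end() are invented names. */
while (*t) {
  switch (pmx_match(t)) {                     /* try patterns in order   */
    case 1: break;                            /* was: case "&q"          */
    case 2: sum += atof(pmx(Start,0)); break; /* was: case "&f<?=eE>&F"  */
    case 3: break;                            /* was: case "&b()"        */
    case 4: t = ""; break;                    /* was: case "&iend"       */
    case 5: break;                            /* was: case "<.>"         */
  }
  if (*t) t = pmx_match_end();                /* advance past the match  */
}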


/**** SCANNER STYLE 2 (macros) ****/
#include "pmx.h"
/* There can be up to 128 tokens per scanner, with ids 0x80 to 0xFF */
#define TOK_STRING 0x81
#define TOK_NUMBER 0x82
#define TOK_LIST   0x83
#define TOK_END    0x84
#define TOK_CHAR   0x85

pmxScanner(   /* pmxScanner() is a pretty complex macro */
   buffer
 ,
   pmxTokSet("&q"         , TOK_STRING)
   pmxTokSet("&f<?=eE>&F" , TOK_NUMBER)
   pmxTokSet("&b()"       , TOK_LIST)
   pmxTokSet("&iend"      , TOK_END)
   pmxTokSet("<.>"        , TOK_CHAR)
 ,
   pmxTokCase(TOK_STRING) :   /* skip strings */
     continue; 

   pmxTokCase(TOK_NUMBER) :   /* sum numbers */ 
     sum += atof(pmxTokStart(0));
     continue;

   pmxTokCase(TOK_LIST):      /* skip lists */
     continue;

   pmxTokCase(TOK_END) :      /* stop processing */ 
     break; 

   pmxTokCase(TOK_CHAR) :     /* skip a char and proceed */
     continue;
);
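
For reference, here is roughly what the same scanner looks like hand-rolled in plain C, with no pmx at all. It is only a quick sketch (in particular, the possibly fractional exponent of the number syntax is handled with a small helper and pow()), but it shows the verbosity both styles are trying to avoid:

/**** SCANNER STYLE 0 (hand-rolled, for comparison) ****/
#include <ctype.h>
#include <math.h>
#include <stdio.h>

/* Parse [+-]?digits[.digits] at *p; on success store the value
   in *out, advance *p past it and return 1. */
static int scan_plain(const char **p, double *out)
{
  const char *s = *p;
  double v = 0, f;
  int sign = 1;

  if (*s == '+' || *s == '-') sign = (*s++ == '-') ? -1 : 1;
  if (!isdigit((unsigned char)*s)) return 0;
  while (isdigit((unsigned char)*s)) v = v * 10 + (*s++ - '0');
  if (*s == '.')
    for (s++, f = 0.1; isdigit((unsigned char)*s); f /= 10)
      v += (*s++ - '0') * f;
  *out = sign * v;
  *p = s;
  return 1;
}

int main(void)
{
  const char *buffer = "1.5 'a string' (3 4 (0 1)() 3) 2E1 End 99";
  const char *t = buffer;
  double sum = 0, m, e;

  while (*t) {
    if (*t == '"' || *t == '\'') {                  /* skip strings   */
      char q = *t++;
      while (*t && *t != q) t++;
      if (*t) t++;
    }
    else if (*t == '(') {                           /* skip lists     */
      int depth = 1;
      for (t++; *t && depth; t++)
        depth += (*t == '(') - (*t == ')');
    }
    else if (tolower((unsigned char)t[0]) == 'e' && /* stop on "end"  */
             tolower((unsigned char)t[1]) == 'n' &&
             tolower((unsigned char)t[2]) == 'd') {
      break;
    }
    else if (scan_plain(&t, &m)) {                  /* sum numbers    */
      if (*t == 'e' || *t == 'E') {                 /* 1.3E-4.2 etc.  */
        const char *x = t + 1;
        if (scan_plain(&x, &e)) { m *= pow(10.0, e); t = x; }
      }
      sum += m;
    }
    else t++;                                       /* skip one char  */
  }
  printf("sum = %g\n", sum);  /* for this buffer: 1.5 + 20 = 21.5 */
  return 0;
}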

Should anyone be interested in the current implementation, the code is here: http://sites.google.com/site/clibutl .

+3  A: 

A custom preprocessor is the typical approach in parser/interpreter generators: macro facilities are very limited and invite problems at the expansion stage, making debugging a tremendous effort.
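
To see why, consider this contrived toy (my own sketch, not pmx code). The whole scanner body is a single macro argument, so the compiler attributes anything that goes wrong inside it to the one line where the macro is invoked:

#include <stdio.h>

/* Toy scanner macro in the style under discussion: it expands to a
   loop plus a switch. A mistake inside the body (say, a stray token)
   is diagnosed at the SCAN(...) invocation line after expansion,
   not at the offending statement -- hence the painful debugging. */
#define SCAN(buf, body)                        \
    for (const char *t = (buf); *t; t++) {     \
        switch (*t) { body }                   \
    }

int main(void)
{
    int digits = 0;
    SCAN("a1b2c3",
        case '1': case '2': case '3': digits++; break;
        default: break;
    )
    printf("digits = %d\n", digits);   /* prints: digits = 3 */
    return 0;
}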

I suggest you use a time-tested tool such as the classical Yacc/Lex Unix programs or, if you want to "extend" C, use C++ and Boost::spirit, a parser generator that uses templates extensively.

Hernán
Thanks Hernán. Assuming a set of macros is adequate for the task (like in the example), I'll take your point on debugging. Lexers were just an example, and I need to stay in the C realm.
Remo.D
I wasn't aware that boost was available for C. And the double :: operator. I would really kill to see the boost implementation in C ;).
Trevor Boyd Smith
No, no, Boost is only available for C++!
Hernán
+5  A: 

A preprocessor will offer a more robust and generic solution. Macros, on the other hand, are quick to whip up, provide a good proof of concept, and are easy when the sample keyword/token space is small. Scaling up or adding new features may become tedious with macros after a point. I'd say whip up macros to get started and then convert them to your preprocessor commands.

Also, try to use a generic preprocessor rather than writing your own, if possible.

[...] I would have other dependencies to handle (m4 for Windows, for example).

Yes. But so would any solution you write :) -- and you'd have to maintain it. Most of the programs you've named have a Windows port available (e.g., see m4 for Windows). The advantage of using such a solution is that you save a lot of time. Of course, the downside is that you'll probably have to get up to speed with the source code if and when the odd bug turns up (though the folks maintaining these tools are very helpful and will certainly give you every help they can).

And again, yes, I'd prefer a packaged solution to rolling my own.

dirkgently
I could certainly use a generic preprocessor (like gema, m4, or even awk or perl) to create the specific preprocessor, but I would have other dependencies to handle (m4 for Windows, for example). Wouldn't it be better to directly provide the preprocessor as part of the package itself?
Remo.D
btw, I take your answer as a +1 for the preprocessor solution :)
Remo.D