I've written a library to match strings against a set of patterns and I can now easily embed lexical scanners into C programs.
I know there are many well-established tools for creating lexical scanners (lex and re2c, just to name the first two that come to mind), but this question is not about lexers; it's about the best approach to "extending" C syntax. The lexer example is just a concrete case of a general problem.
I can see two possible solutions:
- write a preprocessor that converts a source file with the embedded lexer into a plain C file and, possibly, a set of other files to be used in the compilation.
- write a set of C macros to represent lexers in a more readable form.
I've already done both, but the question is: which one would you consider the better practice according to the following criteria?
- Readability. The lexer logic should be clear and easy to understand
- Maintainability. Finding and fixing a bug should not be a nightmare!
- Interference in the build process. The preprocessor requires an additional step in the build process, the preprocessor executable has to be on the path, and so on.
In other words, if you had to maintain or write a piece of software that uses one of the two approaches, which one would disappoint you less?
As an example, here is a lexer for the following problem:
- Sum all numbers (they can be in decimal form, including exponentials like 1.3E-4.2)
- Skip strings (double- and single-quoted)
- Skip lists (similar to LISP lists: (3 4 (0 1)() 3) )
- Stop on encountering the word "end" (case is irrelevant) or at the end of the buffer
Here it is in the two styles.
/**** SCANNER STYLE 1 (preprocessor) ****/
#include "pmx.h"
t = buffer;
while (*t) {
  switch pmx(t) {           /* the preprocessor will handle this */
    case "&q" :             /* skip strings */
      break;

    case "&f<?=eE>&F" :     /* sum numbers */
      sum += atof(pmx(Start,0));
      break;

    case "&b()" :           /* skip lists */
      break;

    case "&iend" :          /* stop processing */
      t = "";
      break;

    case "<.>" :            /* skip a char and proceed */
      break;
  }
}
/**** SCANNER STYLE 2 (macros) ****/
#include "pmx.h"
/* There can be up to 128 tokens per scanner with id x80 to xFF */
#define TOK_STRING x81
#define TOK_NUMBER x82
#define TOK_LIST x83
#define TOK_END x84
#define TOK_CHAR x85
pmxScanner(                 /* pmxScanner() is a pretty complex macro */
  buffer
  ,
  pmxTokSet("&q"         , TOK_STRING)
  pmxTokSet("&f<?=eE>&F" , TOK_NUMBER)
  pmxTokSet("&b()"       , TOK_LIST)
  pmxTokSet("&iend"      , TOK_END)
  pmxTokSet("<.>"        , TOK_CHAR)
  ,
  pmxTokCase(TOK_STRING) :  /* skip strings */
    continue;

  pmxTokCase(TOK_NUMBER) :  /* sum numbers */
    sum += atof(pmxTokStart(0));
    continue;

  pmxTokCase(TOK_LIST) :    /* skip lists */
    continue;

  pmxTokCase(TOK_END) :     /* stop processing */
    break;

  pmxTokCase(TOK_CHAR) :    /* skip a char and proceed */
    continue;
);
Should anyone be interested in the current implementation, the code is here: http://sites.google.com/site/clibutl .