The `@encode()` directive yields a `const char *` containing a coded type descriptor of the various elements of the data type that was passed in. For example:

struct test
{
    int ti;
    char tc;
};

printf("%s", @encode(struct test));
// prints "{test=ic}"

I could see using sizeof() to determine primitive types, and if it were a full object, I could use the class methods to do introspection.

However, how does it determine each element of an opaque struct?

+2  A: 

One way to do it would be to write a preprocessor, which reads the source code for the type definitions and also replaces @encode... with the corresponding string literal.

Another approach, if your program is compiled with -g, would be to write a function that reads the type definition from the program's debug information at run-time, or use gdb or another program to read it for you and then reformat it as desired. The gdb ptype command can be used to print the definition of a particular type (or if that is insufficient there is also maint print type, which is sure to print far more information than you could possibly want).

If you are using a compiler that supports plugins (e.g. GCC 4.5), it may also be possible to write a compiler plugin for this. Your plugin could then take advantage of the type information that the compiler has already parsed. Obviously this approach would be very compiler-specific.

mark4o
Keep in mind that if this is the path you choose to go down, your `@encode` preprocessor must run *after* the C preprocessor, so that types conditionally defined inside `#ifdef` blocks are the ones your `@encode` preprocessor actually parses.
dreamlax
Personally, at least for me, there is a distinction between a 'preprocessor' and a 'parser'. A 'preprocessor' (again, to me) implies something "fairly simple" that requires very little syntactic context to work. Implementing something like `@encode()` requires deep, syntactically contextual information that can only be provided by a full-blown parser. `@encode()` is nothing like a `cpp` `#define`-style "search and replace" macro.
johne
Thanks mark4o! I wanted to split the answer between you and johne, but the site wouldn't let me. I like yours because it's more concise, but his suggests several possible paths to reach the same solution.
Anderson
+2  A: 

You would implement this by implementing an ANSI C compiler first and then adding some implementation-specific pragmas and functions to it.

Yes, I know this is a cynical answer, and I accept the downvotes.

Lothar
...and I thought that implementing a preprocessor might be seen as too difficult.
mark4o
+6  A: 

@Lothar's answer might be "cynical", but it's pretty close to the mark, unfortunately. In order to implement something like @encode(), you need a full-blown parser to extract the type information. Well, at least for anything other than "trivial" @encode() statements (i.e., @encode(char *)). Modern compilers generally have either two or three main components:

  • The front end.
  • The intermediate end (for some compilers).
  • The back end.

The front end parses all the source code and basically converts the source text into an internal, "machine-usable" form.

The back end translates the internal, "machine-usable" form into executable code.

Compilers that have an "intermediate end" typically have one out of some need: they support multiple front ends, possibly for completely different languages. Another reason is to simplify optimization: all the optimization passes work on the same intermediate representation. The gcc compiler suite is an example of a "three stage" compiler. llvm could be considered an "intermediate and back end" stage compiler: the "low level virtual machine" is the intermediate representation, and all the optimization takes place in this form. llvm is also able to keep things in this intermediate representation right up until the last second, which allows for "link time optimization". The clang compiler is really a front end that (effectively) outputs llvm intermediate representation.

So, if you want to add @encode() functionality to an existing compiler, you'd probably have to do it as a "source to source" compiler / preprocessor. This is how the original Objective-C and C++ compilers were written: they parsed the input source text and converted it to "plain C", which was then fed to the standard C compiler. There are a few ways to do this:

Roll your own

  • Use yacc and lex to put together an ANSI C parser. You'll need a grammar; ANSI C grammar (Yacc) is a good start. Actually, to be clear, when I say yacc, I really mean bison and flex, and also, loosely, the various other yacc- and lex-like C-based tools: lemon, dparser, etc.
  • Use perl with Yapp or EYapp, which are pseudo-yacc clones in perl. Probably better for quickly prototyping an idea than C-based yacc and lex; it's perl, after all: regular expressions, associative arrays, no memory management, etc.
  • Build your parser with Antlr. I don't have any experience with this tool chain, but it's another "compiler compiler" tool that seems to be geared more towards Java developers. There appear to be freely available C and Objective-C grammars for it.

Hack another tool

Note: I have no personal experience using any of these tools to do anything like adding @encode(), but I suspect they would be a big help.

  • CIL - No personal experience with this tool, but designed for parsing C source code and then "doing stuff" with it. From what I can glean from the docs, this tool should allow you to extract the type information you'd need.
  • Sparse - Worth looking at, but not sure.
  • clang - Haven't used it for this purpose, but allegedly one of its goals was to be "easily hackable" for just this sort of stuff. In particular (and again, no personal experience), it can do the "heavy lifting" of all the parsing, letting you concentrate on the "interesting" part, which in this case would be extracting context- and syntax-sensitive type information and then converting that into a plain C string.
  • gcc Plugins - Plugins are a feature of gcc 4.5 (the current alpha/beta version of the compiler) and "might" allow you to easily hook into the compiler to extract the type information you'd need. No idea whether the plugin architecture allows for this kind of thing.

Others

  • Coccinelle - Bookmarked this recently to "look at later". This "might" be able to do what you want, and "might" be able to do it without much effort.
  • MetaC - Bookmarked this one recently too. No idea how useful this would be.
  • mygcc - "Might" do what you want. It's an interesting idea, but not directly applicable to this problem. From the web page: "Mygcc allows programmers to add their own checks that take into account syntax, control flow, and data flow information."


Edit #1, the bonus links.

@Lothar makes a good point in his comment. I had actually intended to include lcc, but it looks like it got lost along the way.

  • lcc - The lcc C compiler. This is a C compiler that is particularly small, at least in terms of source code size. It also has a book, which I highly recommend.
  • tcc - The tcc C compiler. Not quite as pedagogical as lcc, but definitely still worth looking at.
  • poc - The poc Objective-C compiler. This is a "source to source" Objective-C compiler. It parses the Objective-C source code and emits C source code, which it then passes to gcc (well, usually gcc). Has a number of Objective-C extensions / features that aren't available in gcc. Definitely worth looking at.
johne
Wow, that's a good and long answer. I saved the details for my answer because someone asking such a naive question is, IMHO, not able to write a C parser. In your link list you forgot about the simple C compilers like LCC or TinyCC. They are better for a beginner. GCC and clang/LLVM are so complex that not even an experienced user without a full-time job as a compiler writer can understand the source code.
Lothar
Anderson