I've been working with a large codebase written primarily by programmers who no longer work at the company. One of the programmers apparently had a special place in his heart for very long macros. The only benefit I can see to using macros is being able to write functions that don't need to be passed in all their parameters (which is recommended against in a best practices guide I've read). Other than that I see no benefit over an inline function.

Some of the macros are so complicated I have a hard time imagining someone even writing them. I tried creating one in that spirit and it was a nightmare. Debugging is extremely difficult, as the debugger collapses N+ lines of code into 1 (e.g. there was a segfault somewhere in this large block of code. Good luck!). I had to actually pull the macro out and run it un-macro-tized to debug it. The only way I can see the person having written these is by generating them automatically from code written in a function after he had debugged it (or by being smarter than me and writing them perfectly the first time, which is always possible, I guess).

Am I missing something? Am I crazy? Are there debugging tricks I'm not aware of? Please fill me in. I would really like to hear from the macro-lovers in the audience. :)

A: 

Part of the benefit is code replication without the eventual maintenance cost - that is, instead of copying code elsewhere you create a macro from it and only have to edit it once...

Of course, you could also just make a method to be called but that is sort of more work... I'm against much macro use myself, just trying to present a potential rationale.

Kendall Helmstetter Gelner
+5  A: 

To me the best use of macros is to compress code and reduce errors. The downside is obviously in debugging, so they have to be used with care.

I tend to think that if the resulting code isn't an order of magnitude smaller and less prone to errors (meaning the macros take care of some bookkeeping details) then it wasn't worth it.

In C++, many uses like this can be replaced with templates, but not all. A simple example of macros that are useful is the event-handler macros in MFC -- without them, creating event tables would be much harder to get right, and the code you'd have to write (and read) would be much more complex.

Lou Franco
I can understand some short macros for those reasons, but is there any benefit to having a sanity validation function be declared as a macro vs an inline function? I run into lots of failures inside some X_VALIDATE macros at work, but they comprise multiple assert statements and loops. A core file won't tell me which assert failed (thus making debugging harder). It seems to me this could be an inline function with no cost and lots of benefits. Would you agree?
jdizzle
Main benefit to assert in a macro (rather than inline) is that `__FILE__` and `__LINE__` will be correct. Since you have a core, you can get the whole stack, so it's not a big deal for you. If it can be written without a macro, do it without a macro. If it can only be done with a macro and you get a benefit, then do it. For me, the benefit needs to be much simpler code or much less error-prone code.
Lou Franco
One of my favorite tricks is to use a macro to pass file and line numbers as parameters to a function, then macro-tize the call to that function, i.e. `void _mycall(int p1, int p2, char* filename, int lineno);` together with `#define mycall(p1, p2) _mycall(p1, p2, __FILE__, __LINE__)`.
jdizzle
+4  A: 

If the macros are extremely long, they probably make the source code short but the generated code efficient. In effect, he may have used macros to explicitly inline code or to remove decision points from the run-time code path.

It might be important to understand that, in the past, such optimizations weren't done by many compilers, and some things that we take for granted today, like fast function calls, couldn't be counted on then.

Daniel
+2  A: 

To me, macros are evil. They have too many side effects, and given that in C++ you can get the same performance gains with `inline`, they are not worth the risk.

For ex. see this short macro:

#define max(a, b) ((a)>(b)?(a):(b))

then try this call:

max(i++, j++)

Moreover, say you have

#define PLANETS 8
#define SOCCER_MIDDLE_RIGHT 8

if an error is reported, it will refer to '8', but not to either of its meaningful representations.

Ariel
Those aren't a good use of macros, as you point out. There are reasons to use them -- I don't believe in evil features, just features used evilly :) -- used correctly, macros are very useful.
Lou Franco
+1  A: 

I only know of two reasons for doing what you describe.

First is to force functions to be inlined. This is pretty much pointless, since the inline keyword usually does the same thing, and function inlining is often a premature micro-optimization anyway.

Second is to simulate nested functions in C or C++. This is related to your "writing functions that don't need to be passed in all their parameters" but can actually be quite a bit more powerful than that. Walter Bright gives examples of where nested functions can be useful.

There are other reasons to use of macros, such as using preprocessor-specific functionality (like including __FILE__ and __LINE__ in autogenerated error messages) or reducing boilerplate code in ways that functions and templates can't (the Boost.Preprocessor library excels here; see Boost.ScopeExit or this sample enum code for examples), but these reasons don't seem to apply for doing what you describe.

Josh Kelley
The inline keyword does not force functions to be inlined, although you're right that often it's a mistake for the programmer to be trying to make those decisions.
Steve Jessop
Fixed. Thanks.
Josh Kelley
A: 

I don't use macros at all. Inline functions serve every useful purpose that macros do. Macros also let you do very weird and counterintuitive things, like splitting up identifiers (how would someone search for the identifier then?).

A: 

Very long macros will have performance drawbacks, like increased compiled binary size, and there are certainly other reasons for not using them.

For the most problematic macros, I would consider running the code through the preprocessor and replacing the macro output with function calls (inline if possible) or straight LOC. If the macros exist for compatibility with other architectures/OSes, though, you might be stuck.

Dana the Sane
A: 

Debugging is extremely difficult, as the debugger collapses N+ lines of code into 1 (e.g. there was a segfault somewhere in this large block of code. Good luck!). I had to actually pull the macro out and run it un-macro-tized to debug it.

Not an answer to your question, but are you doing this part manually? Most compilers should have a switch to generate this automatically for you. For example, in gcc you would pass the -E flag to get the preprocessed code.

eduffy
His problem is, as far as I understand, that the debug information will point him to lines where the macro is used, but where he only sees `MACRO(ARGS);`. Using `gcc -E` will give him the preprocessed file, but that won't help him find the right code line unless he re-compiles the `gcc -E` output, and sadly, that will also have resolved `#include` and everything else.
Johannes Schaub - litb
precisely my point, litb
jdizzle