When you see a line like:
#define IX(i,j) ((i)+(N+2)*(j))
is that equivalent to:
int IX(int i, int j) {
    return ((i)+(N+2)*(j));
}
How do you know the return type etc?
#defines are preprocessor commands. Before the code is compiled, every occurrence of IX(i,j) is replaced with its definition; you can think of it as a copy-paste operation. Because this is pure text substitution, IX(i,j) has no return type at all. This lack of type safety is both a feature and a drawback, so use it carefully.
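To make that concrete, here is a minimal sketch (N is given a small illustrative value, and ix_func() is a hypothetical function written for comparison): the macro's parameters have no declared type, so a double argument stays a double, while the function silently converts it to int.
#include <stdio.h>

#define N 4
#define IX(i,j) ((i)+(N+2)*(j))

int ix_func(int i, int j) { return i + (N + 2) * j; }

int main(void) {
    /* Macro: 1.5 is pasted in as-is, so the expression is evaluated in double arithmetic. */
    printf("%f\n", IX(1.5, 2));       /* prints 13.500000 */
    /* Function: 1.5 is silently converted to the int 1 before the body runs. */
    printf("%d\n", ix_func(1.5, 2));  /* prints 13 */
    return 0;
}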
No, it's not equivalent. A #define is a macro, a kind of source-code text-substitution facility.
Your example of an IX() function has many attributes relevant to executing code, like having its instructions compiled once, having an address, and requiring integer parameters.
The macro takes parameters, but there is no type handling until the substituted arithmetic is compiled. Even then, any argument whose expansion happens to make sense could do something useful. Or it could do something unexpected.
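For instance, in this minimal sketch (the value of N and the struct are just illustrative assumptions), passing a struct only fails when the compiler reaches the expanded + operator, while passing a char pointer compiles, because the expansion happens to be legal pointer arithmetic:
#define N 4
#define IX(i,j) ((i)+(N+2)*(j))

struct point { int x, y; };

int main(void) {
    struct point p = {1, 2};
    /* No parameter type exists to reject p; the error only appears when
       the expansion ((p)+(N+2)*(2)) hits the invalid binary + operator. */
    /* int k = IX(p, 2); */           /* uncommenting this fails to compile */

    const char *msg = "hello";
    const char *q = IX(msg, 0);       /* ((msg)+(N+2)*(0)) == msg: legal, if surprising */
    (void)p; (void)q;
    return 0;
}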
As a handy rule of thumb, the more that macros are used, the less understandable and maintainable the code tends to be.
Yes and no: they don't work the same way, but in this particular case, the effect will be the same if you only use arguments of the type you'd use in the function signature.
Macros are really just text substitution performed before actually compiling the code, so no type information is needed; the macro use is simply replaced with the macro content, and the rest is left to the compiler.
What that means is that IX(i++,j--) is executed differently depending on whether it's a function or a macro: if it's a macro, the arguments are evaluated where they are referred to; if it's a function, they're evaluated once, when the function is called.
Since no parameter is referred to twice, there is no observable difference after each of those has executed, but they're still handled differently.
As a rule of thumb, if what you're trying to do requires code to be placed at a specific place, then you can use a macro; otherwise you should use a function.
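As a sketch of that difference (the value of N and the ix_func() helper are assumptions for illustration), the expansion of IX(i++, j--) pastes each argument exactly once, so the end result happens to match the function call:
#include <stdio.h>

#define N 4
#define IX(i,j) ((i)+(N+2)*(j))

int ix_func(int i, int j) { return i + (N + 2) * j; }

int main(void) {
    int i = 1, j = 5;
    /* Macro: expands to ((i++)+(N+2)*(j--)); each argument appears once,
       so i and j each change exactly once, inside the expression itself. */
    int a = IX(i++, j--);

    int i2 = 1, j2 = 5;
    /* Function: i2++ and j2-- are evaluated first, then the values 1 and 5 are passed. */
    int b = ix_func(i2++, j2--);

    printf("%d %d\n", a, b);          /* both print 31 */
    return 0;
}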
Macros never get seen by the compiler - the preprocessor replaces the text.
So, when you write:
result = IX(5, 3);
The compiler will see this:
result = ((5)+(N+2)*(3));
This can have impacts on behaviour, but it depends on your macro. In this case, not so much (there are also performance and debugging differences, but let's not worry about them here).
Had you, for example, defined your macro like this (note the second use of the i parameter):
#define IX(i,j) ((i)+(i+2)*(j))
And called it like so:
result = IX(++i, j);
Then the macro and function would have different behaviour.
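A quick sketch of that difference (the helper name is just for illustration): with the second use of i, the macro expansion modifies i twice, which not only differs from the function call but, because the two ++i operations are unsequenced, is actually undefined behaviour in C.
#define IX(i,j) ((i)+(i+2)*(j))

int ix_func(int i, int j) { return i + (i + 2) * j; }

int main(void) {
    int i = 1, j = 3;
    /* Function: ++i happens once, so this is ix_func(2, 3) == 2 + (2+2)*3 == 14. */
    int f = ix_func(++i, j);

    int k = 1;
    /* Macro: expands to ((++k)+(++k+2)*(j)); k is modified twice with no
       sequence point in between -- undefined behaviour, and in any case
       k ends up incremented twice instead of once. */
    /* int m = IX(++k, j); */         /* don't do this */
    (void)f; (void)k;
    return 0;
}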
All the answers are correct.
Since macros are handled by the preprocessor before the compiler translates the code to machine instructions they can have the interesting side effects others have mentioned.
I don't entirely agree with the statement that macros reduce code understandability. I think that if they are used cautiously and intelligently, and are provably correct, they can lead to more understandable (dare I say it?) self-documenting code.
Example:
#define MAX(N1,N2) ((N1) > (N2) ? (N1) : (N2))
biggest = MAX(temperatureOne, temperatureTwo);
Yes, I know that MAX(x++, y) would have undesirable side effects, but that's where the intelligent use comes in.
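To show what those side effects look like in practice, here is a minimal sketch of the double evaluation:
#include <stdio.h>

#define MAX(N1,N2) ((N1) > (N2) ? (N1) : (N2))

int main(void) {
    int x = 3, y = 1;
    /* Expands to ((x++) > (y) ? (x++) : (y)): because the first operand wins,
       x++ is evaluated twice, so the result is 4 and x ends up at 5. */
    int biggest = MAX(x++, y);
    printf("biggest=%d x=%d\n", biggest, x);   /* prints biggest=4 x=5 */
    return 0;
}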