views:

1110

answers:

4

I have a couple of questions regarding the following snippet:

#include <stdio.h>

#define TOTAL_ELEMENTS (sizeof(array) / sizeof(array[0]))
int array[] = {23, 34, 12, 17, 204, 99, 16};

int main()
{
    int d;

    for (d = -1; d <= (TOTAL_ELEMENTS - 2); d++)
        printf("%d\n", array[d + 1]);

    return 0;
}

Here the output of the code does not print the array elements as expected. But when I add a cast to (int) in the macro definition of TOTAL_ELEMENTS, as

 #define TOTAL_ELEMENTS (int) (sizeof(array) / sizeof(array[0]))

It displays all array elements as expected.

  • How does this typecast work?

Based on this, I have a few questions:

  • Does it mean that if I have a macro definition such as:

    #define AA (-64)

then, by default in C, all constants defined as macros are treated as signed int?

If yes, then

  • But if I have to force a constant defined in a macro to behave as an unsigned int, is there a constant suffix that I can use (I tried UL and UD; neither worked)?

  • How can I define a constant in a macro definition to behave as unsigned int?

+6  A: 

sizeof yields a value of type size_t, which is an unsigned integer type. That's why you need the cast.

See more here.

Skurmedel
@Skurmedel: That's fine. But if I define a macro as #define bb (-64*(sizeof(int)/sizeof(int))) and then use bb, it behaves as a signed int. I thought it should have behaved as an unsigned int, since the macro definition of bb multiplies a signed int by an unsigned value, so the result should be unsigned int (C promotion rules). Am I incorrect or missing something?
goldenmean
According to the C99 standard (6.3.1.8): "Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type." I'm guessing your sizeof is returning unsigned long, with values that can be represented by a signed int, and thus the resulting type is signed.
Skurmedel
The section above explains that if the unsigned type has a rank equal to or larger than the signed type, the signed type will be converted to the unsigned type. That would be the case if sizeof returned unsigned int. However, your compiler might have sizeof return unsigned long.
Skurmedel
+6  A: 

Look at this line:

for(d=-1;d <= (TOTAL_ELEMENTS-2);d++)

In the first iteration, you are checking whether

-1 <= (TOTAL_ELEMENTS-2)

The sizeof operator yields an unsigned value (of type size_t), so the check fails: -1 is converted to unsigned, i.e. -1 signed becomes 0xFFFFFFFF unsigned on 32-bit machines.

A simple change in the loop fixes the problem:

for (d = 0; d <= (TOTAL_ELEMENTS - 1); d++)
    printf("%d\n", array[d]);

To answer your other questions: C macros are expanded textually; there is no notion of types. The C compiler sees your loop as this:

for(d=-1;d <= ((sizeof(array) / sizeof(array[0]))-2);d++)

If you want to define an unsigned constant in a macro, use the usual suffix (u for unsigned, ul for unsigned long).

Miroslav Bajtoš
@Miroslav: What suffix do I use to define an unsigned constant in a macro?
goldenmean
fyi, the C idiom for looping over an array is this: for (int i = 0; i < arrLen; i++) { printf("%d\n", arr[i]); } You start at 0 and test for less than the length.
plinth
+2  A: 

Regarding your question about

#define AA (-64)

See Macro definition and expansion in the C preprocessor:

Object-like macros were conventionally used as part of good programming practice to create symbolic names for constants, e.g.

#define PI 3.14159

... instead of hard-coding those numbers throughout one's code. However, both C and C++ provide the const keyword, which provides another way to avoid hard-coding constants throughout the code.

Constants defined as macros have no associated type. Use const where possible.

gimel
+1  A: 

Answering just one of your sub-questions:

To "define a constant in a macro" (this is a bit sloppy, you're not defining a "constant", merely doing some text-replacement trickery) that is unsigned, you should use the 'u' suffix:

#define UNSIGNED_FORTYTWO 42u

This will insert an unsigned int literal wherever you type UNSIGNED_FORTYTWO.

Likewise, you often see (in <math.h> for instance) suffixes used to select the exact floating-point type:

#define FLOAT_PI 3.14f

This inserts a float (i.e. "single precision") floating-point literal wherever you type FLOAT_PI in the code.

unwind