
Is sizeof(enum) == sizeof(int), always?

  • Or is it compiler-dependent?
  • Is it wrong to say that, since compilers are optimized for word lengths (memory alignment), int is the word size on a particular compiler? Does that mean there is no processing penalty if I use enums, since they would be word-aligned?
  • Is it not better to put all the return codes in an enum, since I clearly do not care about the values they get, only the names, when checking the return types? If that is the case, wouldn't #define be better, as it would save memory?

What is the usual practice? If I have to transport these return types over a network and some processing has to be done at the other end, which would you prefer: enums, #defines, or const ints?

EDIT: Just checking on the net: since compilers do not symbolically link macros, how do people debug, then? Compare the integer value with the header file?

From the answers, I am adding this line below, as I need clarification:

"So it is implementation-defined, and sizeof(enum) might be equal to sizeof(char), i.e. 1."

  • Does it not mean that the compiler checks the range of values in the enum and then assigns memory? I don't think so, but of course I don't know :). Can someone please explain what "might be" means here?
+9  A: 

C99, 6.7.2.2p4 says

Each enumerated type shall be compatible with char, a signed integer type, or an unsigned integer type. The choice of type is implementation-defined,108) but shall be capable of representing the values of all the members of the enumeration. [...]

Footnote 108 adds

An implementation may delay the choice of which integer type until all enumeration constants have been seen.

So it is implementation-defined, and sizeof(enum) might be equal to sizeof(char), i.e. 1.

In choosing the size for some small range of integers, there is always a penalty. If you make it small in memory, there is probably a processing penalty; if you make it larger, there is a space penalty. It's a time-space trade-off.

Error codes are typically #defines, because they need to be extensible: different libraries may add new error codes. You cannot do that with enums.

Martin v. Löwis
Does it not mean that the compiler checks the range of values in the enum and then assigns memory? I don't think so, but of course I don't know :). Can someone please explain what "might be" means here?
Vivek Sharma
"The compiler does" is a useless statement. There are many compilers in the world, and some do it one way, and others do it a different way (even on the same hardware). If you want to know what a specific compiler does, you must name the compiler (including version and target CPU and operating system). It may well be that *your* compiler always uses int for enums.
Martin v. Löwis
+8  A: 

It is compiler-dependent and may differ between enums. The semantics are as follows:

enum X { A, B };

// A has type int
assert(sizeof(A) == sizeof(int));

// some integer type. Maybe even int. This is
// implementation defined. 
assert(sizeof(enum X) == sizeof(some_integer_type));

Note that "some integer type" in C99 may also include extended integer types (which the implementation, however, has to document, if it provides them). The type of the enumeration is some type that can store the value of any enumerator (A and B in this case).

I don't think there are any penalties in using enumerations. Enumerators are integral constant expressions too (so you may use them to initialize static or file-scope variables, for example), and I prefer them to macros whenever possible.

Enumerators don't need any runtime memory. Only when you create a variable of the enumeration type may you use runtime memory. Just think of enumerators as compile-time constants.

I would just use a type that can store the enumerator values (I should know the rough range of values beforehand), cast to it, and send it over the network. Preferably the type should be a fixed-width one, like int32_t, so there are no conflicts when different machines are involved. Or I would print the number and scan it on the other side, which avoids some of these problems.


Response to Edit

Well, the compiler is not required to use any particular size. An easy thing to see is that the sign of the values matters: unsigned types can give a significant performance boost in some calculations. The following is the behavior of GCC 4.4.0 on my box:

int main(void) {
  enum X { A = 0 };
  enum X a; // X compatible with "unsigned int"
  unsigned int *p = &a;
}

But if you assign a -1, then GCC chooses to use int as the type that X is compatible with:

int main(void) {
  enum X { A = -1 };
  enum X a; // X compatible with "int"
  int *p = &a;
}

Using GCC's -fshort-enums option makes it use the smallest type that still fits all the values:

int main() {
  enum X { A = 0 };
  enum X a; // X compatible with "unsigned char"
  unsigned char *p = &a;
}
Johannes Schaub - litb
I think now, rather than storing each command in an unsigned char variable and equating it to a value, I shall use #define, as has always been done. Enums will face byte-order issues. I think I now understand why #define has always been used for error codes, state names, commands, etc.
Vivek Sharma
It is not completely true that there's no difference between enums and #defines: as you said, for #defines the compiler doesn't even see the original token, since it is substituted with the real value by the preprocessor. The compiler does see enums, though, and if you compile the code with debugging symbols, the debugger will show you the enumerated labels instead of their values, which greatly aids debugging.
Metiu
+6  A: 

Is sizeof(enum) == sizeof(int), always?

The ANSI C standard says:

Each enumerated type shall be compatible with char, a signed integer type, or an unsigned integer type. The choice of type is implementation-defined. (6.7.2.2 Enumeration specifiers)

So I would take that to mean no.

If this is the case, wouldn't #define be better, as it would save memory?

In what way would using defines save memory over using an enum? An enum is just a type that allows you to provide more information to the compiler. In the actual resulting executable, it is just turned into an integer, just as the preprocessor converts a macro created with #define into its value.

What is the usual practice? If I have to transport these return types over a network and some processing has to be done at the other end

If you plan to transport values over a network and process them on the other end, you should define a protocol. Decide on the size in bits of each type and the endianness (the order of the bytes), and make sure you adhere to that in both the client and the server code. Also, don't just assume that because it happens to work, you've got it right. It may be that the endianness on your chosen client and server platforms matches, for example, but that might not always be the case.

IRBMe
Yes, that is the concern. I have to transfer some values to be used as commands over the network, and I wish to make it as efficient and robust as possible, which is why I need opinions on what to use for commands: #defines or enums. The range of commands shall not be more than 20, so according to everyone, within the char limit. I think I shall post it as a new question; I would get a better response.
Vivek Sharma
The easiest thing to do would be simply to use an unsigned char, then. You don't have to worry about endianness or encoding that way.
IRBMe
+2  A: 

On some compilers the size of an enum depends on how many entries the enum has (if the values fit in a byte, one byte; otherwise, int). But this depends on the compiler and the compiler settings.

nuriaion
Is there any way I can force this? A nice input, thanks though.
Vivek Sharma
Because of these problems in our project (we have to use a really old C compiler), we decided not to use enums, but to define everything with #define.
nuriaion
+2  A: 

No.

Example: The CodeSourcery compiler

When you define an enum like this:

enum MyEnum1 {
    A = 1,
    B = 2,
    C = 3
};
// will have sizeof 1 (fits in a char)

enum MyEnum2 {
    D = 1,
    E = 2,
    F = 3,
    G = 400
};
// will have sizeof 2 (doesn't fit in a char)

Details from their mailing list

Iulian Şerbănoiu
If that is the case, it's awesome.
Vivek Sharma