I was wondering what the rationale is behind the different styles of enum declaration in Cocoa.

Like this:

  enum { constants... };
  typedef NSUInteger sometype;

Is the reason to use the typedef to get assignments to NSUInteger to work without casting?

Sometimes the typedef is to NSInteger and sometimes to NSUInteger; why not always use NSInteger? Is there a real benefit to using NSUInteger?

Enum tag names are still used sometimes, as with _NSByteOrder.
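
For reference, the two styles I mean look roughly like this (the names are just placeholders, not real Cocoa declarations):

  // Style 1: anonymous enum for the constants, typedef for the storage type
  // (assumes <Foundation/Foundation.h> is imported for NSUInteger)
  enum {
      SomeTypeFirst  = 0,
      SomeTypeSecond = 1
  };
  typedef NSUInteger SomeType;

  // Style 2: tagged enum, which seems to be the style used for _NSByteOrder
  enum _SomeTag {
      SomeTagFirst  = 0,
      SomeTagSecond = 1
  };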

This answer was very useful too: http://stackoverflow.com/questions/707512/typedef-enum-in-objective-c.

+1  A: 

Whilst you could use something like

  typedef enum { constants... } sometype;

there is no guarantee about the eventual bit size of the data type. Well, that's not strictly true, but it's true enough. It's better for APIs to be defined with concrete data sizes than with something that can change depending on the compiler settings in use.
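
To make that concrete, here's a rough sketch (the exact numbers depend on your compiler, settings and architecture):

  #import <Foundation/Foundation.h>
  #include <stdio.h>

  enum sometype_e { kFirst = 0, kSecond = 1 };

  int main(void) {
      // The compiler picks whatever size fits the constants; commonly int,
      // but the C standard leaves the width up to the implementation.
      printf("enum sometype_e: %zu bytes\n", sizeof(enum sometype_e));

      // NSUInteger's width comes from the platform headers, independent of
      // which constants an API happens to list.
      printf("NSUInteger:      %zu bytes\n", sizeof(NSUInteger));
      return 0;
  }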

Jeff
+1  A: 

Is the reason to use the typedef to get assignments to NSUInteger to work without casting?

The typedef is used to specify the base type for the enumeration values. You can always cast an enumeration value to another type; you just truncate the value if you cast to a smaller type (NSUInteger to unsigned short, for example).
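
For example (a sketch; the constant name is made up, and <Foundation/Foundation.h> is assumed to be imported for NSUInteger):

  enum { kSomeLargeConstant = 0x12345678 };

  void example(void) {
      NSUInteger full = kSomeLargeConstant;          // no cast needed; it's just an integer
      unsigned short small = (unsigned short)full;   // casting to a smaller type truncates: 0x5678
      (void)small;
  }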

NSInteger and NSUInteger were introduced to ease the 64-bit migration of applications by providing architecture- and platform-independent types for both signed and unsigned integers. This way, no matter how many bits the CPU has, applications do not need to be rewritten.
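
The underlying typedefs look roughly like this (simplified from memory; the real Foundation header has a few more conditions):

  #if __LP64__
  typedef long NSInteger;              // 64-bit on 64-bit architectures
  typedef unsigned long NSUInteger;
  #else
  typedef int NSInteger;               // 32-bit on 32-bit architectures
  typedef unsigned int NSUInteger;
  #endif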

Sometimes the typedef is to NSInteger and sometimes to NSUInteger; why not always use NSInteger? Is there a real benefit to using NSUInteger?

The choice depends on the values in the enumeration. Some enumerations have a lot of values, so they need all the bits available:

  • NSInteger offers 2^31 positive and negative values (on 32-bit architectures).
  • NSUInteger offers 2^32 positive values (on 32-bit architectures).
  • If your enumeration is meant to contain only positive values, use NSUInteger.
  • If your enumeration is meant to contain both positive and negative values, use NSInteger.
  • NSUInteger is usually used for flag enumerations, as it provides 32 distinct flags (on 32-bit architectures) that can be combined at will; see the sketch after this list.
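
A minimal sketch of such a flag enumeration (the names are invented, not a real Cocoa type):

  #import <Foundation/Foundation.h>

  enum {
      MyWidgetOptionNone      = 0,
      MyWidgetOptionDraggable = 1 << 0,   // each flag gets its own bit
      MyWidgetOptionResizable = 1 << 1,
      MyWidgetOptionHidden    = 1 << 2
  };
  typedef NSUInteger MyWidgetOptions;

  void example(void) {
      // Combine flags with |, test them with &.
      MyWidgetOptions opts = MyWidgetOptionDraggable | MyWidgetOptionResizable;
      BOOL resizable = (opts & MyWidgetOptionResizable) != 0;
      (void)resizable;
  }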

I don't know whether Apple's development team has a rule of thumb for this choice. I hope so...

Laurent Etiemble
You mean *2 to the power of* 31 and 32, right? Also, both types offer 2**32 distinct values; the difference is that with a signed type (such as `NSInteger`), half of them (2**31) are negative, and about half (2**31 - 1) are positive. With an unsigned type, all 2**32 are non-negative. And that's the real difference: If all your enumeration values, ever, are going to be positive, you can declare the type as `NSUInteger`. Conversely, if you're going to have negative values, you must use `NSInteger`.
Peter Hosey
You are right. I meant flag values. I have updated my answer accordingly.
Laurent Etiemble
+3  A: 

Several reasons:

Reason 1: Flexibility:

enum lickahoctor { yes = 0, no = 1, maybe = 2 };

declares an enumeration. You can use the values yes, no and maybe anywhere and assign them to any integral type. You can also use this as a type, by writing

enum lickahoctor myVar = yes;

This makes it nice because if a function takes a parameter with the type enum lickahoctor, you'll know that you can assign yes, no or maybe to it. Also, the debugger will know, so it'll display the symbolic name instead of the numerical value. Trouble is, the compiler will only let you assign values you've defined in enum lickahoctor to myVar. If, for example, you want to define a few flags in the base class and then add a few more flags in the subclass, you can't do it this way.

If you use an int instead, you don't have that problem. So you want to use some sort of int, so you can assign arbitrary constants.
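
A rough sketch of what that looks like with the typedef approach (class and constant names are invented, and <Foundation/Foundation.h> is assumed to be imported for NSUInteger):

  // Base class header: flag constants plus a plain integer typedef
  enum {
      MyBaseOptionA = 1 << 0,
      MyBaseOptionB = 1 << 1
  };
  typedef NSUInteger MyOptions;

  // Subclass header: more constants, still assignable to MyOptions,
  // because MyOptions is just an integer type rather than a closed enum.
  enum {
      MySubclassOptionC = 1 << 2,
      MySubclassOptionD = 1 << 3
  };

  MyOptions combined = MyBaseOptionA | MySubclassOptionC;   // fine with the int-based typedef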

Reason 2: Binary compatibility:

The compiler chooses a nice size that fits all the constants you've defined in an enum. There's no guarantee what you will get. So if you write a struct containing such a variable directly to a file, there is no guarantee that it will still be the same size when you read it back in (according to the C standard, at least -- it's not quite that bleak in practice).

If you use some kind of int instead, the platform usually guarantees a particular size for that number. Especially if you use one of the types guaranteed to be a particular size, like int32_t/uint32_t or NSInteger/NSUInteger.
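
A sketch of the fixed-size approach for on-disk data (the record layout is invented for illustration):

  #include <stdint.h>

  enum { kRecordKindNote = 0, kRecordKindTodo = 1 };

  // A fixed-width field keeps the on-disk layout independent of how the
  // compiler happens to size the enum above.
  struct FileRecord {
      uint32_t kind;      // holds one of the kRecordKind... constants
      uint32_t length;    // byte count of the payload that follows
  };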

Reason 3: Readability and self-documentation

When you declare myVar above, it's immediately obvious what values you can put in it. If you just use an int or an NSInteger, it isn't. So what you do is use

enum { yes, no, maybe };
typedef NSInteger lickahoctor;

to define a nice name for the integer somewhere near the constants, which will remind people that a variable of this type can hold these values. But you still get the benefit of a predictable, fixed size and the ability to define additional values in subclasses, if needed.
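
Building on those two lines, usage might look like this (the function is hypothetical, and Foundation is assumed to be imported for NSInteger):

  lickahoctor currentAnswer = maybe;        // reads better than a bare NSInteger
  void setAnswer(lickahoctor newAnswer);    // callers can see which constants are expected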

uliwitness
NSInteger isn't guaranteed to be a particular size; in fact it's guaranteed to be different on `i386` and `x86_64`. If you wrote out a binary file with a `lickahoctor` in it on a Core Duo, upgraded your Mac, and read it back in, hilarity would ensue.
Graham Lee
Of course, Graham. Brainfart there. Thanks for correcting that.
uliwitness