
The other day a user reported a bug to me about a toolbar item that was disabled when it should have been enabled. The validation code (simplified for your benefit) looked like:

- (BOOL)validateToolbarItem:(NSToolbarItem *)toolbarItem {
    NSArray *someArray = /* array from somewhere */;
    return [someArray count];
}

It took me a few minutes to realize that -count returns a 32-bit unsigned int, while BOOL is an 8-bit signed char. It just so happened that in this case someArray had 768 elements in it; 768 is 0x300, so the lower 8 bits were all 0. When the unsigned int is implicitly converted to a BOOL upon returning, only the low byte survives, so the result is NO, even though a human would expect the answer to be YES.

I've since changed my code to return [someArray count] > 0; however, now I'm curious why BOOL is really a signed char. Is that really "better" in some way than it being an int?
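
To make the failure mode concrete, here is a minimal standalone sketch of that narrowing (assuming BOOL is the signed char typedef discussed in the answers below; on some newer Apple platforms BOOL is a true bool and this particular truncation does not happen). The count of 768 mirrors the scenario above:

#import <Foundation/Foundation.h>

int main(void) {
    NSUInteger count = 768;          // 0x300: the low 8 bits are all zero

    BOOL truncated = count;          // implicit narrowing keeps only the low byte, giving 0 (NO)
    BOOL compared  = (count > 0);    // the comparison yields exactly 0 or 1 before any narrowing

    NSLog(@"truncated = %d, compared = %d", truncated, compared);  // prints 0 and 1
    return 0;
}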

A: 

It's smaller, that's all really.

alphomega
+2  A: 

An obvious answer is that it's four times smaller (on typical 32-bit and 64-bit architectures), and also doesn't have any alignment requirements.

Pavel Minaev
The irony being that on most compilers / architectures, it will pad out the other 3 bytes so that it can do much faster 32-bit reads/writes
Paul Betts
@Paul: I've never seen any compiler pad `char` in structs - can you give an example? It most certainly won't do it in arrays due to contiguous storage requirement.
Pavel Minaev
@Pavel: struct x { int a; char b; int c; }; sizeof(x) == 12. That's probably what he's referring to.
sharth
@sharth: the padding in that case is to align `c` though, and doesn't have much to do with the type of `b` specifically.
Pavel Minaev
I'll see if I can find an example via kd, I may be full of it though. You're definitely right wrt arrays.
Paul Betts
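
To make the padding discussion in the comments above concrete, here is a small sketch (the struct names are made up; the sizes in the comments assume a typical platform with 4-byte, 4-byte-aligned ints):

#import <Foundation/Foundation.h>

struct all_ints  { int a;  int b;  int c;  };  // 4 + 4 + 4                   = 12
struct mixed     { int a;  char b; int c;  };  // 4 + 1 + 3 padding bytes + 4 = 12
struct all_chars { char a; char b; char c; };  // 1 + 1 + 1, no padding       =  3

int main(void) {
    // The lone char in struct mixed effectively costs 4 bytes because the int
    // after it must stay 4-byte aligned; chars on their own (or in arrays) pack
    // tightly, which is the point about BOOL having no alignment requirement.
    NSLog(@"%lu %lu %lu",
          (unsigned long)sizeof(struct all_ints),
          (unsigned long)sizeof(struct mixed),
          (unsigned long)sizeof(struct all_chars));
    return 0;
}
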
+1  A: 

A boolean value requires only a single bit (0 or 1); however, standard systems deal with bytes as the smallest addressable unit. A bool represented as a byte, at 8 bits, is 4 times smaller than a 32-bit integer, hence the motivation for a byte over an integer.

Noah Watkins
+1  A: 

Bools are useful in saving a bit of space and in constraining the value to 0 or 1. It's a smaller type for the same reason you might use a float over a double, or a short over a long. Just space concerns.

This is why it's a good idea to be explicit with your casts and, in the case of a boolean value, to perform an actual logical comparison between two values of the same type so you get a genuine 0-or-1 result instead of a truncated value.

M2tM
+2  A: 

Throwback to simpler times.

The BOOL type was created back when CPUs naturally worked with 8-bit types, rarely padding them out to 16 or 32 bits. Memory was also scarce, and spending 4 bytes on a single bit of information would actually eat a noticeable chunk of additional memory.

Note that BOOL likely predates C++'s bool (and C99's _Bool) by quite a while (iirc they may be about the same age; when NeXT chose Objective-C, C++ was at about the same level of popularity).

bbum
+2  A: 

The answers given (thus far) focus on why BOOL isn't an int. That answer is pretty clear: a char is smaller than an int, and when Objective-C was designed back in the 80s, shaving off a few bytes was always good.

But your question also seems to be asking, "Why is BOOL signed rather than unsigned?" For that, we can look where BOOL is typedef'ed, in /usr/include/objc/objc.h:

typedef signed char     BOOL; 
// BOOL is explicitly signed so @encode(BOOL) == "c" rather than "C" 
// even if -funsigned-char is used.

So there's an answer: the Objective-C designers didn't want to typedef BOOL to char, because on some systems, under some compilers (and remember that Objective-C predates ANSI C, so C compilers differed), a char was signed, and under some, unsigned. The designers wanted @encode(BOOL) to return a consistent value across platforms, so they included the signedness in the typedef.
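
As a quick illustration of what that buys you, here is a minimal sketch that prints a few type encodings (the values in the comments are what I'd expect from a typical Apple toolchain where BOOL is the signed char typedef above):

#import <Foundation/Foundation.h>

int main(void) {
    // Because BOOL is explicitly typedef'd to signed char, its encoding stays
    // the same no matter how a given compiler treats plain char.
    NSLog(@"%s", @encode(BOOL));           // "c" (signed char)
    NSLog(@"%s", @encode(unsigned char));  // "C"
    NSLog(@"%s", @encode(int));            // "i"
    return 0;
}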

But that still leaves the question: why signed rather than unsigned? I don't have a definitive answer for that; I imagine the reason is that they had to pick one or the other, and decided to go with signed. If I had to conjecture further, I'd say it's because ints are signed by default (that is, a plain int with no signedness qualifier is signed).

mipadi