One version uses an array initialized with appropriate values, one byte per character in the code set (plus 1 to allow for EOF, which may also be passed to the classification functions):
static const unsigned char bits[257] = { ...initialization... };

int isupper(int ch)
{
    assert(ch == EOF || (ch >= 0 && ch <= 255));
    return (bits + 1)[ch] & UPPER_MASK;
}
(The table is unsigned char so that mask bits above 0x7F are safe regardless of whether plain char is signed. Indexing through bits + 1 means EOF, conventionally -1, lands on bits[0].)
Note that the same bits array can serve all the classification functions - isupper(), islower(), isalpha(), and so on - each with its own mask value. And if the array is modifiable at run time, the implementation can adapt to different (single-byte) code sets.
The cost is space: the 257-byte array.
The other version assumes that the upper-case characters form one contiguous range, and that the set of valid upper-case characters is small (fine for ASCII, not so good for ISO 8859-1 or its relatives):
int isupper(int ch)
{
    return (ch >= 'A' && ch <= 'Z');  /* ASCII only - not a good implementation! */
}
This can (almost) be implemented as a macro; the difficulty is avoiding evaluating the argument twice, which the standard does not permit for the <ctype.h> functions (a library function implemented as a macro must evaluate each argument exactly once). Using a non-standard GNU extension (statement expressions), it can be written as a macro that evaluates its argument just once. Extending this to ISO 8859-1 requires a second range check, along the lines of:
int isupper(int ch)
{
    return (ch >= 'A' && ch <= 'Z') ||
           (ch >= 0xC0 && ch <= 0xDE && ch != 0xD7);
}
(In ISO 8859-1 the accented upper-case letters run from 0xC0 to 0xDE, but 0xD7 is the multiplication sign and must be excluded.)
Repeat that as a macro often enough and the 'space saving' rapidly becomes a cost: every expansion duplicates the comparisons inline, whereas the table lookup has a fixed size no matter how many classification functions share it.
Given the requirements of modern code sets, the mapping (table) version is almost invariably used in practice; it can adapt at run time to the current code set, which the range-based versions cannot.