Let's assume that your platform has eight-bit bytes, and suppose we have the bit pattern 10101010. To a signed char, that value is -86. For unsigned char, though, that same bit pattern represents 170. We haven't moved any bits around; it's the same bits, interpreted two different ways.
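Here's a minimal sketch of that, assuming eight-bit bytes and a two's-complement platform; the variable names are just for illustration:

    #include <iostream>

    int main() {
        unsigned char bits = 0xAA;   // the bit pattern 10101010

        // The same byte reinterpreted as a signed char (two's complement assumed).
        signed char as_signed = static_cast<signed char>(bits);

        std::cout << "unsigned char: " << static_cast<int>(bits) << '\n';       // prints 170
        std::cout << "signed char:   " << static_cast<int>(as_signed) << '\n';  // prints -86
        return 0;
    }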
Now for char. The standard doesn't say which of those two interpretations should be correct. A char holding the bit pattern 10101010 could be either -86 or 170. It's going to be one of those two values, but you have to know the compiler and the platform before you can predict which it will be. Some compilers offer a command-line switch to control which one it will be. Some compilers have different defaults depending on what OS they're running on, so they can match the OS convention.
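If you want to see which choice your toolchain made, you can print a char holding that pattern and ask std::numeric_limits; GCC and Clang, for instance, have -fsigned-char and -funsigned-char switches to override their default. A quick sketch:

    #include <iostream>
    #include <limits>

    int main() {
        char c = static_cast<char>(0xAA);   // the bit pattern 10101010 again

        // Whether this prints -86 or 170 depends on your compiler and platform.
        std::cout << static_cast<int>(c) << '\n';

        // numeric_limits reports which choice was made.
        std::cout << std::boolalpha
                  << "char is signed: " << std::numeric_limits<char>::is_signed << '\n';
        return 0;
    }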
In most code, it really shouldn't matter. The three character types are treated as distinct types for the purposes of overloading, and pointers to one of those types aren't compatible with pointers to another. Try calling strlen with a signed char* or an unsigned char*; it won't work.
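Here's a small example of both points; the which() overload set and the buffer are invented for illustration:

    #include <cstring>
    #include <iostream>

    // char, signed char, and unsigned char are three distinct types,
    // so these three overloads can coexist and are selected separately.
    void which(char)          { std::cout << "char\n"; }
    void which(signed char)   { std::cout << "signed char\n"; }
    void which(unsigned char) { std::cout << "unsigned char\n"; }

    int main() {
        which('A');                              // picks the char overload
        which(static_cast<signed char>('A'));    // picks the signed char overload
        which(static_cast<unsigned char>('A'));  // picks the unsigned char overload

        unsigned char buf[] = { 'h', 'i', '\0' };
        // std::strlen(buf);   // error: unsigned char* doesn't convert to const char*
        std::size_t n = std::strlen(reinterpret_cast<const char*>(buf));  // explicit cast needed
        std::cout << n << '\n';                  // prints 2
        return 0;
    }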
Use signed char when you want a one-byte signed numeric type, and use unsigned char when you want a one-byte unsigned numeric type. Use plain old char when you want to hold characters. That's what the programmer was thinking when writing the typedef you're asking about. The name "byte" doesn't carry any connotation of holding character data, whereas "unsigned char" has the word "char" in it, and that leads some people to think it's a good type for holding characters, or that it's a good idea to compare it with variables of type char.
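I haven't seen the typedef you're asking about, but it was presumably something along these lines; the checksum helper here is invented purely as an example of byte-oriented code:

    #include <cstddef>

    // A name that says "raw byte", with no suggestion that it holds character data.
    typedef unsigned char byte;

    // Example use: summing raw bytes, where unsigned wraparound is what you want.
    byte checksum(const byte* data, std::size_t len) {
        byte sum = 0;
        for (std::size_t i = 0; i < len; ++i)
            sum = static_cast<byte>(sum + data[i]);   // arithmetic modulo 256
        return sum;
    }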
Since you're unlikely to do general arithmetic on characters, it won't matter whether char is signed or unsigned on any of the platforms and compilers you use.
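For ordinary character handling, code like the following behaves identically either way, because nothing depends on the numeric value of the char (count_a is just an illustrative name):

    #include <cstddef>
    #include <string>

    // Counts occurrences of 'a'; works the same whether char is signed or unsigned.
    std::size_t count_a(const std::string& s) {
        std::size_t n = 0;
        for (char c : s)
            if (c == 'a')   // comparison against a character literal, no arithmetic
                ++n;
        return n;
    }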