I have a co-worker who maintains that TRUE used to be defined as 0 and all other values were FALSE. I could swear that in every language I've worked with, if you could even get a value for a boolean, the value for FALSE was 0. Did TRUE really use to be 0? If so, when did we switch?
If nothing else, bash shells still use 0 for true, and 1 for false.
It might be in reference to a process's result code: in most cases, a result code of 0 after a process has run means, "Hey, everything worked fine, no problems here."
In any language I've ever worked in (going back to BASIC in the late 70s), false has been considered 0 and true has been non-zero.
I'm not certain, but I can tell you this: tricks relying on the underlying nature of TRUE and FALSE are prone to error because the definition of these values is left up to the implementer of the language (or, at the very least, the specifier).
I can't recall TRUE being 0. 0 is something a C programmer would return to indicate success, though. This can be confused with TRUE. It's not always 1 either. It can be -1 or just non-zero.
For languages without a built in boolean type, the only convention that I have seen is to define TRUE as 1 and FALSE as 0. For example, in C, the if
statement will execute the if clause if the conditional expression evaluates to anything other than 0.
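As a quick illustration of that convention, here's a minimal C sketch; the TRUE/FALSE macros are just the usual hand-rolled definitions, not anything the (pre-C99) language itself provides:
#include <stdio.h>

#define TRUE  1   /* conventional hand-rolled definitions */
#define FALSE 0

int main(void)
{
    int done = FALSE;

    if (!done)      /* 0 is false, so !done is true here */
        printf("still working\n");

    if (42)         /* any non-zero value counts as true */
        printf("this always prints\n");

    return 0;
}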
I even once saw a coding guidelines document which specifically said not to redefine TRUE and FALSE. :)
If you are using a language that has a built-in boolean, like C++, then the keywords true and false are part of the language, and you should not rely on how they are actually implemented.
Even today, in some languages (Ruby, Lisp, ...) 0 is true because everything except nil is true (in Ruby, everything except nil and false). More often 1 is true. That's a common gotcha, so it's sometimes considered good practice not to rely on 0 being false but to do an explicit test. Java requires you to do this.
Instead of this
int x;
....
x = 0;
if (x) // might be ambiguous
{
}
Make it explicit:
if (0 != x)
{
}
The 0 / non-0 thing your coworker is confused about is probably referring to when people use numeric values as return value indicating success, not truth (i.e. in bash scripts and some styles of C/C++).
Using 0 = success allows for a much greater precision in specifying causes of failure (e.g. 1 = missing file, 2 = missing limb, and so on).
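A minimal C sketch of that idea; the exit-code names and the file-opening scenario here are made up purely for illustration:
#include <stdio.h>

/* hypothetical exit codes for this example */
#define EXIT_OK           0
#define EXIT_MISSING_FILE 1
#define EXIT_BAD_ARGS     2

int main(int argc, char *argv[])
{
    if (argc < 2)
        return EXIT_BAD_ARGS;      /* failure: no file name given */

    FILE *f = fopen(argv[1], "r");
    if (f == NULL)
        return EXIT_MISSING_FILE;  /* failure: couldn't open the file */

    fclose(f);
    return EXIT_OK;                /* success is always 0 */
}
A caller (a shell script, say) can then branch on the exact value instead of just success/failure.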
As a side note: in Ruby, the only false values are nil and false. 0 is true, but not in contrast to other numbers: 0 is true simply because it's an object, and every object other than nil and false counts as true.
I recall doing some VB programming in an Access form where True was -1.
In languages like C there was no boolean type, so you had to define your own. Could they have been working with non-standard BOOL overrides?
System calls in the C standard library typically return -1 on error and 0 on success. Also, the Fortran arithmetic IF statement would (and probably still does) jump to one of three line numbers depending on whether the condition evaluates to less than, equal to, or greater than zero.
eg: IF (I-15) 10,20,10
would test the condition I == 15, jumping to line 20 if true (the expression evaluates to zero) and to line 10 otherwise.
Sam is right about the problems of relying on specific knowledge of implementation details.
Several functions in the C standard library return an 'error code' integer as a result. Since noErr is defined as 0, a quick check can be 'if it's 0, it's OK'. The same convention carried over to a Unix process's 'result code', that is, an integer that gives some indication of how a given process finished.
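For instance, remove() from the standard library follows that pattern, returning 0 on success and non-zero on failure; a quick sketch (the file name is made up):
#include <stdio.h>

int main(void)
{
    /* remove() returns 0 when the file was deleted, non-zero otherwise */
    if (remove("scratch.tmp") == 0)
        printf("deleted\n");
    else
        perror("remove failed");

    return 0;
}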
In Unix shell scripting, the result code of the command just executed is available, and is typically used to signify whether the command 'succeeded' or not, with 0 meaning success and anything else indicating a specific non-success condition.
From that, all test-like constructs in shell scripts use 'success' (that is, a result code of 0) to mean TRUE, and anything else to mean FALSE.
On a totally different plane, digital circuits frequently use 'negative logic'. That is, even though 0 volts is called 'binary 0' and some positive value (commonly +5V or +3.3V, but nowadays it's not rare to use +1.8V) is called 'binary 1', some events are 'asserted' by a given pin going to 0. I think there are some noise-resistance advantages, but I'm not sure about the reasons.
Note, however, that there's nothing 'ancient' about this, and no 'switching time'. Everything I know about this is based on old conventions, but those conventions are totally current and relevant today.
DOS and exit codes from applications generally use 0 to mean success and non-zero to mean failure of some type!
DOS error codes are 0-255, and when tested using the 'errorlevel' syntax the test matches anything at or above the specified value, so the following matches 2 and above to the first goto, 1 to the second, and 0 (success) to the final one!
IF errorlevel 2 goto CRS
IF errorlevel 1 goto DLR
IF errorlevel 0 goto STR
It's easy to get confused when bash's true/false return codes are the other way around:
$ false; echo $?
1
$ true; echo $?
0
For the most part, false is defined as 0, and true is non-zero. Some programming languages use 1, some use -1, and some use any non-zero value.
Unix shells, though, use the opposite convention.
Most commands that run in a Unix shell are actually small programs. They pass back an exit code so that you can determine whether the command was successful (a value of 0), or whether it failed for some reason (1 or more, depending on the type of failure).
This is used in the sh/ksh/bash shell interpreters within the if/while/until commands to check conditions:
if command
then
    # successful
fi
If the command is successful (ie, returns a zero exit code), the code within the statement is executed. Usually, the command that is used is the [ command, which is an alias for the test command.
General rule:
Shells (DOS included) use "0" as "No Error"... which is not necessarily the same thing as "true".
Programming languages use non-zero to denote true.
That said, if you're in a language which lets you define TRUE or FALSE, define them and always use the constants.
In the C language, before C++, there was no such thing as a boolean. Conditionals were done by testing ints. Zero meant false and any non-zero meant true. So you could write
if (2) { alwaysDoThis(); } else { neverDoThis(); }
Fortunately, C++ added a dedicated boolean type.
I have heard of and used older compilers where true > 0, and false <= 0.
That's one reason you don't want to use if(pointer) or if(number) to check for zero: they might evaluate to false unexpectedly.
Similarly, I've worked on systems where NULL wasn't zero.
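The habit such systems push you toward is spelling the comparison out; a small C sketch of that explicit style (the function and data are hypothetical):
#include <stddef.h>
#include <stdio.h>

static void process(const int *p, int count)
{
    /* compare explicitly instead of relying on "truthiness" */
    if (p != NULL && count != 0)
        printf("first element: %d\n", p[0]);
}

int main(void)
{
    int data[] = { 7, 8, 9 };
    process(data, 3);   /* prints the first element */
    process(NULL, 0);   /* safely does nothing */
    return 0;
}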
I worked at a company with a large amount of old C code. Some of the shared headers defined their own values for TRUE and FALSE, and some did indeed have TRUE as 0 and FALSE as 1. This led to "truth wars":
/* like my constants better */
#undef TRUE
#define TRUE 1
#undef FALSE
#define FALSE 0