I have a co-worker who maintains that TRUE used to be defined as 0 and all other values were FALSE. I could swear that in every language I've worked with, if you could even get a numeric value out of a boolean, the value for FALSE was 0. Was TRUE ever 0? If so, when did we switch?

+3  A: 

If nothing else, bash shells still use 0 for true, and 1 for false.

zigdon
+15  A: 

It might be in reference to a process's result code: in most cases, after a process has run, a result code of 0 meant, "Hey, everything worked fine, no problems here."
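
As a rough sketch in C (the file name here is made up): the value a program returns from main is its result code, and 0 is reserved for success:

#include <stdio.h>

int main(void)
{
    /* "input.txt" is a hypothetical file, used purely for illustration */
    FILE *f = fopen("input.txt", "r");
    if (f == NULL)
        return 1;   /* non-zero result code: something went wrong */
    fclose(f);
    return 0;       /* 0: "everything worked fine, no problems here" */
}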

stephenbayer
With the justification that there's nothing more to say if it worked, whereas if it failed, there are plenty of non-zero return codes to indicate what kind of failure it was.
slim
ah ... good old COM
MagicKat
And hence the unix command 'true' does nothing but issue a return code of 0.
Justsalt
A: 

In any language I've ever worked in (going back to BASIC in the late 70s), false has been considered 0 and true has been non-zero.

17 of 26
+3  A: 

I'm not certain, but I can tell you this: tricks relying on the underlying nature of TRUE and FALSE are prone to error because the definition of these values is left up to the implementer of the language (or, at the very least, the specifier).

Sam Erwin
Well, in a language like Perl, where there is a loose definition of truth, it's something that you learn not only to allow for but to love. But in the type-safe realm of most compiled languages, true and false are rather strict in their definitions.
stephenbayer
Yes, but it does become a potential issue, even in the same language, if you're moving between compilers or platforms, or even just updating to the latest revision of your language. That's why I espouse being implementation-agnostic unless you're very certain of your platform or you need to be.
Sam Erwin
A: 

I can't recall TRUE being 0. 0 is something a C programmer would return to indicate success, though. This can be confused with TRUE.

It's not always 1 either. It can be -1 or just non-zero.

GSerg
A: 

For languages without a built in boolean type, the only convention that I have seen is to define TRUE as 1 and FALSE as 0. For example, in C, the if statement will execute the if clause if the conditional expression evaluates to anything other than 0.

I even once saw a coding guidelines document which specifically said not to redefine TRUE and FALSE. :)

If you are using a language that has a built in boolean, like C++, then keywords true and false are part of the language, and you should not rely on how they are actually implemented.
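
A minimal C sketch of both points, assuming the usual home-grown TRUE/FALSE definitions (the macro names are a common convention, not part of the C language itself):

#include <stdio.h>

#define TRUE  1
#define FALSE 0

int main(void)
{
    int flag = 2;                 /* non-zero, so C treats it as true */
    if (flag)
        printf("taken: any non-zero value passes the test\n");
    if (flag == TRUE)
        printf("never printed: 2 != 1\n");  /* comparing against TRUE is the trap */
    return 0;
}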

Dima
I believe the keywords true and false have values specified by the standard, so you CAN rely on them.
Mark Ransom
A number of languages, including older versions of VB, define true/false as -1/0. That is so that bitwise operations and logical operations do the same thing.
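
A quick C illustration of why all-bits-set truth makes the bitwise operators double as logical ones, on a two's-complement machine (int stands in for VB's boolean here):

#include <stdio.h>

int main(void)
{
    int t = -1;   /* VB-style True: all bits set */
    int f = 0;    /* False: no bits set */

    printf("%d\n", t & f);   /* 0  : bitwise AND doubles as logical AND */
    printf("%d\n", t | f);   /* -1 : bitwise OR doubles as logical OR  */
    printf("%d\n", ~t);      /* 0  : bitwise NOT doubles as logical NOT */
    printf("%d\n", ~1);      /* -2 : with True defined as 1, NOT breaks */
    return 0;
}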
A: 

Even today, in some languages (Ruby, Lisp, ...) 0 is true, because everything except nil (and, in Ruby, false) is true. More often, 1 is true. That's a common gotcha, so it's sometimes considered good practice not to rely on 0 being false but to do an explicit test. Java requires you to do this.

Instead of this

int x;    
....
x = 0;
if (x)  // might be ambiguous
{
}

Make it explicit

if (0 != x)
{
}
David Nehme
Technically in Java if statements are only valid for boolean expressions and integers do not cast to booleans. In Java, true is true and false is false and neither is a number.
Mr. Shiny and New
+12  A: 

The 0 / non-0 thing your coworker is confused about probably refers to people using numeric values as return values indicating success, not truth (e.g. in bash scripts and some styles of C/C++).

Using 0 = success allows for much greater precision in specifying the cause of failure (e.g. 1 = missing file, 2 = missing limb, and so on).
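
Sketched in C (these particular failure codes are invented for illustration):

/* 0 is the single success value; each failure gets its own code */
enum { OK = 0, ERR_MISSING_FILE = 1, ERR_MISSING_LIMB = 2 };

int main(void)
{
    /* a caller inspecting the exit status can tell which failure occurred */
    return ERR_MISSING_FILE;
}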

As a side note: in Ruby, the only false values are nil and false. 0 is true, but not in contrast to other numbers; 0 is true simply because it's an object, and every object other than nil and false counts as true.

webmat
A: 

I recall doing some VB programming in an Access form where True was -1.

Nathan Feger
I think many BASIC flavors evaluate zero as false and anything non-zero as true, but define the constant False as 0 and the constant True as -1 (or all 1's in binary).
C. Dragon 76
Most BASIC flavours have no logical operators; all operators are bitwise. Hence (NOT 0) is -1 and comparisons evaluate to -1 if true and 0 if false.
Artelius
A: 

In languages like C there was no boolean type, so you had to define your own. Could your co-worker have been working with a non-standard BOOL override?

Nick Berardi
+1  A: 

System calls in the C standard library typically return -1 on error and 0 on success. Also, the Fortran arithmetic IF statement would (and probably still does) jump to one of three line numbers depending on whether the condition evaluates to less than, equal to, or greater than zero.

e.g.: IF (I-15) 10,20,10

would test for the condition I == 15, jumping to line 20 if true (the expression evaluates to zero) and to line 10 otherwise.

Sam is right about the problems of relying on specific knowledge of implementation details.

Richard
+1  A: 

I remember that PL/I had no boolean type. You could declare a bit and assign it the result of a boolean expression. Then, to use it, you had to remember that 1 was false and 0 was true.

+1  A: 

Several functions in the C standard library return an 'error code' integer as their result. Since noErr is defined as 0, a quick check can be 'if it's 0, it's OK'. The same convention carried over to a Unix process's 'result code'; that is, an integer that gives some indication of how a given process finished.
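
To make the C-library side concrete, a minimal sketch (the file name is invented); remove() follows the same 0-means-OK convention:

#include <stdio.h>

int main(void)
{
    /* remove() returns 0 on success and a non-zero value on failure */
    if (remove("scratch.tmp") == 0)
        printf("deleted\n");
    else
        perror("remove failed");
    return 0;
}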

In Unix shell scripting, the result code of the command just executed is available and is typically used to signify whether the command 'succeeded' or not, with 0 meaning success and anything else a specific non-success condition.

From that, all test-like constructs in shell scripts use 'success' (that is, a result code of 0) to mean TRUE, and anything else to mean FALSE.

On a totally different plane, digital circuits frequently use 'negative logic'. That is, even if 0 volts is called 'binary 0' and some positive value (commonly +5V or +3.3V, but nowadays it's not rare to use +1.8V) is called 'binary 1', some events are 'asserted' by a given pin going to 0. I think there are some noise-resistance advantages, but I'm not sure about the reasons.

Note, however, that there's nothing 'ancient' about this, and there was no 'switching time'. Everything I know about this is based on old conventions, but those conventions are totally current and relevant today.

Javier
A: 

DOS and exit codes from applications generally use 0 to mean success and non-zero to mean failure of some type!

DOS error codes are 0-255 and, when tested using the 'errorlevel' syntax, match anything at or above the specified value. So the following sends 2 and above to the first goto, 1 to the second, and 0 (success) to the final one:

IF errorlevel 2 goto CRS
IF errorlevel 1 goto DLR
IF errorlevel 0 goto STR
Ray Hayes
A: 

It's easy to get confused when bash's true and false commands return exit codes the other way around:

$ false; echo $?
1
$ true; echo $?
0
neu242
A: 

For the most part, false is defined as 0, and true is non-zero. Some programming languages use 1, some use -1, and some use any non-zero value.

Unix shells, though, use the opposite convention.

Most commands that run in a Unix shell are actually small programs. They pass back an exit code so that you can determine whether the command was successful (a value of 0), or whether it failed for some reason (1 or more, depending on the type of failure).

This is used in the sh/ksh/bash shell interpreters within the if/while/until commands to check conditions:

if command
then
   # successful
fi

If the command is successful (i.e. returns a zero exit code), the code within the statement is executed. Usually, the command used is the [ command, which is an alias for the test command.
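
The same zero-means-success test can be seen from C via the standard system() call (a sketch; on POSIX systems the value it returns is 0 exactly when the command exits with status 0):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* run the Unix 'true' command, which does nothing but exit with status 0 */
    int status = system("true");
    if (status == 0)
        printf("command succeeded, just as the shell's if would conclude\n");
    return 0;
}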

A: 

General rule:

  1. Shells (DOS included) use "0" to mean "no error"... which is not necessarily the same thing as "true".

  2. Programming languages use non-zero to denote true.

That said, if you're in a language which lets you define TRUE or FALSE, define them and always use the constants.

hometoast
A: 

The funny thing is that it depends on the language you are working with. In Lua, zero counts as true internally. The same goes for many syscalls in C.

unexist
A: 

In the C language, before C++, there was no such thing as a boolean. Conditionals were done by testing ints: zero meant false and any non-zero value meant true. So you could write

if (2) { alwaysDoThis(); } else { neverDoThis(); }

Fortunately, C++ added a dedicated boolean type.

DJClayworth
Anyone want to explain why this was voted down?
DJClayworth
A: 

I have heard of and used older compilers where true > 0, and false <= 0.

That's one reason you don't want to use if (pointer) or if (number) to check for zero; they might evaluate to false unexpectedly.

Similarly, I've worked on systems where NULL wasn't zero.
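
A small sketch of the explicit style being suggested:

#include <stdio.h>

int main(void)
{
    int number = 5;
    int *pointer = &number;

    if (pointer != NULL)   /* explicit, instead of relying on if (pointer) */
        printf("pointer is non-null\n");
    if (number != 0)       /* explicit, instead of relying on if (number) */
        printf("number is non-zero\n");
    return 0;
}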

davenpcj
+4  A: 

I worked at a company with a large amount of old C code. Some of the shared headers defined their own values for TRUE and FALSE, and some did indeed have TRUE as 0 and FALSE as 1. This led to "truth wars":

/* I like my constants better */
#undef TRUE
#define TRUE 1

#undef FALSE
#define FALSE 0
Sam Stokes
Voted up because I've been bitten by that in the past ;)
Alaric