Signed zero is zero with an associated
sign. In ordinary arithmetic, −0 = +0
= 0. However, in computing, some number representations allow for the
existence of two zeros, often denoted
by −0 (negative zero) and +0 (positive
zero). This occurs in some signed
number representations for integers,
and in most floating point number
representations. The number 0 is
usually encoded as +0, but it can
be represented by either +0 or −0.
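As a minimal illustration (assuming a platform where C's double is an IEEE 754 binary64 value), the following sketch prints the bit patterns of the two zeros; they differ only in the sign bit:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double pos = 0.0, neg = -0.0;
    uint64_t pbits, nbits;
    /* memcpy is a well-defined way to view a double's bit pattern */
    memcpy(&pbits, &pos, sizeof pbits);
    memcpy(&nbits, &neg, sizeof nbits);
    printf("+0.0 bits: %016llx\n", (unsigned long long)pbits); /* 0000000000000000 */
    printf("-0.0 bits: %016llx\n", (unsigned long long)nbits); /* 8000000000000000 */
    return 0;
}
```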
The IEEE 754 standard for floating
point arithmetic (presently used by
most computers and programming
languages that support floating point
numbers) requires both +0 and −0. The
zeroes can be considered a variant of
the extended real number line such
that 1/−0 = −∞ and 1/+0 = +∞;
division by zero is undefined only for ±0/±0.
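A short sketch of these rules, assuming an IEEE 754-conformant C environment (the undefined case ±0/±0 yields NaN):

```c
#include <stdio.h>

int main(void) {
    double pz = 0.0, nz = -0.0;
    printf("1.0 / +0.0 = %f\n", 1.0 / pz);  /* inf  */
    printf("1.0 / -0.0 = %f\n", 1.0 / nz);  /* -inf */
    printf("0.0 / 0.0  = %f\n", pz / pz);   /* nan: the one undefined case */
    return 0;
}
```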
Negatively signed zero echoes the
mathematical analysis concept of
approaching 0 from below as a
one-sided limit, which may be denoted
by x → 0⁻ or x → ↑0. The
notation "−0" may be used informally
to denote a small negative number that
has been rounded to zero. The concept
of negative zero also has some
theoretical applications in
statistical mechanics and other
disciplines.
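Written out in the notation of one-sided limits, the IEEE rules 1/−0 = −∞ and 1/+0 = +∞ mirror

```latex
\lim_{x \to 0^-} \frac{1}{x} = -\infty,
\qquad
\lim_{x \to 0^+} \frac{1}{x} = +\infty .
```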
It is claimed that the inclusion of
signed zero in IEEE 754 makes it much
easier to achieve numerical accuracy
in some critical problems,[1] in
particular when computing with complex
elementary functions.[2] On the other
hand, the concept of signed zero runs
contrary to the general assumption
made in most mathematical fields (and
in most mathematics courses) that
negative zero is the same thing as
zero. Representations that allow
negative zero can be a source of
errors in programs, as software
developers do not realize (or may
forget) that, while the two zero
representations behave as equal under
numeric comparisons, they are
different bit patterns and yield
different results in some operations.
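The following sketch (again assuming IEEE 754 semantics and C99's <math.h>) shows the pitfall: the two zeros compare equal, yet division and atan2 distinguish them:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double pz = 0.0, nz = -0.0;
    printf("pz == nz       : %d\n", pz == nz);          /* 1: equal under comparison */
    printf("signbit(nz)    : %d\n", signbit(nz) != 0);  /* 1: the sign bit is set    */
    printf("1.0 / pz       : %f\n", 1.0 / pz);          /*  inf */
    printf("1.0 / nz       : %f\n", 1.0 / nz);          /* -inf */
    printf("atan2(pz, -1.0): %f\n", atan2(pz, -1.0));   /*  pi  */
    printf("atan2(nz, -1.0): %f\n", atan2(nz, -1.0));   /* -pi  */
    return 0;
}
```

(On platforms where the math library is separate, compile with -lm.)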