views:

1732

answers:

20

So, the title says it all. Why are floating point values so prolific in computer programming? Due to problems like rounding errors, and not even being able to accurately represent numbers such as 0.1, I really can't see how they got as far as they did.

I understand that computation is faster with floating point numbers; however, I can think of only a few cases where they are actually the right data type to be using. If you sit back and think about every time you used a floating point value, how many times did you say: well, some error would be OK, as long as the result was a few microseconds faster?

It really makes me think because Jeff was talking about NP-completeness, and how heuristics give an answer that is kind of right. And, well, computers shouldn't do that. They should give you the answer that is correct. Yet we see floating point values used in many applications where they are completely inappropriate.

What really bugs me isn't that floating point exists, but that in many languages there isn't even a viable alternative: a non-floating-point, decimal value. A lot of programmers doing financial applications have to fall back to storing the number of cents in an integer field, which brings with it all kinds of other problems.

Why do floats continue to be so prolific, even though they can't represent the real answer, and we expect computers to be accurate?

[EDIT]

Just to clarify, I was talking about base 2 floating point, not base 10 floating point. .NET offers the Decimal data type, which is a base 10 floating point value and offers a much better representation of the numbers we deal with on a daily basis in most computer programs. I find it hard to believe that even modern languages like Java don't support base 10 floating point values, unless you want to move into the realm of things like BigDecimal, which isn't really the right answer either in a lot of situations.
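
For what it's worth, the difference is easy to see in a few lines of Python (used here just because it has both kinds of type built in; .NET's Decimal behaves the same way for this particular case):

from decimal import Decimal

print(0.1 + 0.1 + 0.1 == 0.3)                # False: binary floats accumulate rounding error
print(Decimal('0.1') * 3 == Decimal('0.3'))  # True: a base 10 type represents 0.1 exactly
print('%.20f' % 0.1)                         # 0.10000000000000000555: the value actually stored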

+7  A: 
eed3si9n
Your example of (1.0 / 3.0) * 3.0 == 1.0 is probably not a very good example because this depends on representation. Quite often it doesn't exactly equal the original value (e.g. 0.999999999...)
Mitch Wheat
Great. So we can represent 0.33333333.... but we can't represent 0.1. What genius thought up that one.
Kibbee
In base 2, you can't represent 0.33333333... exactly.
Mitch Wheat
Aside from a fraction type there will always be trivial values that can't be represented in any finite representation. Even then, transcendental functions muck it up.
BCS
@Mitch, == comparison is meaningless in floating point, because everything is an approximation. I am not claiming float can express 1.0/3.0 accurately, but (1.0 / 3.0) * 3.0 will come close enough to 1.0 because it has rounding built in.
eed3si9n
@eed3si9n: That is not what your answer implied.
Mitch Wheat
but binary floating point CAN represent 3/8, or 7/64 or 3/32, or 123/512.. exactly, (cause they're fractional powers of 2), whereas a decimal representation cannot..
Charles Bretana
3/8=375/1000, 7/64=109375/1000000, n/2^m = n*5^m/10^m
BCS
Why not just represent it as a fraction?
John Nilsson
@John Nilsson, a fraction type is useful if the data and calculations are in clean fractions, but in reality, when you need a decimal number (fixed or float), it's messier. I can imagine fractions quickly becoming unintuitive and computationally expensive for complicated stuff.
eed3si9n
A: 

They are so prolific because there are more of them! Dividing one integer by another will more often produce a fractional result.

The integers are countable, whereas the reals (floats) are not.

Mitch Wheat
Is that meant to be funny?
Kibbee
now I'm going to complain my language doesn't have a native irrational number type.
Jimmy
About the integers being countable and reals not? No. I'm serious. It's taught in all uni-level mathematics courses.
Mitch Wheat
floats are most certainly countable. They have a fixed binary representation, and each valid float can be enumerated. There is a finite number of floats.
Dour High Arch
floats are, but The Reals (|R) are not.
Mitch Wheat
Yes, but nor are The Integers.
TraumaPony
The rationals are countable, The reals are not... But everything in a computer is "countable" because it's ... digital (a 64 bit float can only have 2^64 different values) ... Not every real can be represented by any scheme, but not every rational can either, even though they're countable.
Charles Bretana
Every rational can be represented perfectly in a digital computer (up to memory limits, I'm not sure if the finite memory problem is what Charles is saying). The easiest way to see it is as a normalized pair of integers, but probably the best actual way is with the Stern-Brocot tree.
Doug McClean
And "countable" does not mean finite. Countable means they can be put into a one-to-one correspondence with the integers.
Charles Bretana
Doug, Take any representation scheme that exists (including those that represent rationals as 2 integral values) and I can easily tell you a number that scheme cannot represent, simply by picking a number with an exponent too big or too small to fit in the space allowed to represent it.
Charles Bretana
There are an infinite (but countable) number of rationals, and no computer with a finite amount of storage can differentiate between an infinite number of different values in that finite memory.
Charles Bretana
@TraumaPony: incorrect. The integers are countable (in the mathematical sense)
Mitch Wheat
+3  A: 

Faster than what? Integers? FP is slower than ints.

FP is the only choice for anything other than rational numbers, and most things that aren't integers aren't rational either.

BCS
You're right: floating point is the only rational choice. Ha! Hahaha!
erickson
What about fixed-point numbers? They're seldom the right choice, but they are an alternative.
Kevin
fixed point works, but has most of the issues of FP and then some. The only advantage is you can get reasonable perf without a FP unit. (some work in that direction: http://klabs.org/mapld04/presentations/session_e/9_e186_buehler_s.pdf)
BCS
+5  A: 

The standard binary float is an integer times a power of 2. In base 2, it is exact up to a certain number of digits. The question then becomes: why isn't it base 10 instead of base 2? Well, I think that is where the decimal types (or base 10 floats) come in.
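
A quick illustration of the "integer times a power of 2" point, in Python only because it is convenient; the answer itself isn't language-specific:

print((0.625).as_integer_ratio())   # (5, 8): 0.625 is exactly 5 / 2**3
print((0.1).as_integer_ratio())     # (3602879701896397, 36028797018963968): the nearest binary fraction to 0.1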

Jimmy
The standard float is actually not simply an integer times a power of 2. It's (1 + fraction) times a power of 2.
eed3si9n
@eed3si9n: Actually, it is an integer times a power of 2, but its usually represented with a biased exponent. So when you write 1 + fraction, you're just dividing the integer mantissa by 2**m where m is the mantissa size in bits.
Chris Dodd
+2  A: 

Some things are rather hard to do without using inexact arithmetic; working out sin(22) exactly is perhaps slightly more complex than most hardware could deal with.

tatwright
+31  A: 

Because there's no alternative. You need to be able to represent values of very different magnitudes, from the tiniest fraction to ridiculous numbers like 10^300.

And to provide some measure of efficiency, a fixed-size representation has to be used. A primitive datatype that could conceivably grow to hundreds of bytes just won't work out.

So the only plausible approach is to use a datatype which allows you a certain number of significant digits, and then an exponent so you can slide the decimal point back and forth to represent numbers of pretty much any magnitude. And that's what floating point numbers are. And yes, they have rounding errors, because they have finite precision, just like any number you might write down in base 10 on a notepad. It too contains rounding errors. Try writing down a simple number like "a third". Now try writing down all the digits of pi.

In both cases, you'll lose precision and get rounding errors. And obviously a CPU has to deal with the same constraints (it too has to pick a representation, even if it means some numbers can't be expressed accurately, like a third in base10, and it has to use a finite number of bits, so it can only represent a finite number of digits).

No matter how you represent your numbers, you will run up against the limits of the finite number of bits allocated for the number, and the limitations that not all fractional values can be represented accurately in a given number base.

Integers aren't perfect either. Try multiplying 3 billion by 2 in a standard 32-bit integer, and see what you get. Now try dividing 8 by 3, and see what you get. That's right, integers have rounding errors and a limited number of digits too.
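
A small sketch of the same point (in Python, whose ints don't overflow, so the 32-bit wrap-around is simulated with a mask):

product = (3_000_000_000 * 2) & 0xFFFFFFFF   # simulate a 32-bit unsigned register
print(product)                               # 1705032704, not 6000000000: the high bits are lost
print(8 // 3)                                # 2: integer division silently discards the remainder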

It's not that floating point is a bad choice of representation, it's just that it has its limitations, just like any other option would, and too many programmers are unaware of this.

Edit: In response to your edit, no, base10 is not "better". It suffers from the exact same problems for many values, just not 0.1. But you'd still get rounding errors when performing computations with a finite base10 datatype.

The key is to accept that these datatypes can never be entirely accurate, and write your program so that it can handle this uncertainty. And you'd have to do this whether your program used base 2, 10, 16 or 137.

jalf
Your conclusion is correct (the last paragraph) but the rationale behind it is wrong (the first 2 paragraphs). It takes 8 microseconds to compute 10**300 in Python (25 -- 600). If it were merely a question of efficiency I would prefer a slightly slower but correct answer. The answer is deeper: http://is.gd/9ZwX
J.F. Sebastian
So based on what *you* would prefer, you conclude that everyone else's rationale is wrong? ;) Taking multiple microseconds to perform a single operation is not always acceptable.
jalf
Have you read my answer? The point is even if we disregard performance issues there are fundamental reasons why a computer can't always get a precise answer (regardless of time it takes).
J.F. Sebastian
Which answer? A link which points to this thread? And I don't see how that contradicts what I said. A computer can never get a precise answer, and floating-point happens to be a good, efficient approximation.
jalf
Here's the correct link: http://stackoverflow.com/questions/327020/why-are-floating-point-values-so-prolific#327156 (is.gd ate the anchor tag)
J.F. Sebastian
+2  A: 

Floating point numbers are the default choice because they can represent both tiny and huge numbers equally well, which is often what is needed in scientific computing. You cannot accomplish the same using fixed point numbers of constant (and reasonable) size.

Both floating point (mostly with respect to addition of small with big numbers) and fixed point numbers are subject to rounding. The difference is that in fixed point computation the rounding error must be considered negligible with respect to the least significant digit of the data type, whereas in a floating point operation it must be considered negligible with respect to the result.

Now, the main issue with floating point rounding is that, while there are strategies to deal with it (say, in numerical integration), the rounding error is often difficult to estimate a priori in a general computation, since it depends on the final as well as all the intermediate results.

When 1) you don't have to represent huge numbers like 10^50 as well as tiny numbers like 10^-50, 2) computations are mainly sums and differences, and 3) rounding must be kept strictly under control, as is the case with money amounts, fixed point numbers are indeed the best choice.
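
A minimal sketch of that last case (Python, with cents held in a plain integer; the two-decimal-place convention is the "fixed point"):

price_cents = 10                  # $0.10 stored as an integer number of cents
total_cents = price_cents * 3     # 30: exact, no rounding at any step
print('$%d.%02d' % divmod(total_cents, 100))   # $0.30
print(0.10 * 3)                   # 0.30000000000000004 with binary floating point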

Federico Ramponi
+11  A: 

English answer: because mathematicians have proven that it's impossible to represent real numbers exactly on a digital computer. Even if you give your computer an infinite number of bytes of memory and you only want to be able to exactly represent all real numbers between 0 and 1, it's still provably impossible.

Mathematical answer: real numbers form an uncountably infinite set. Computer memory is made of discrete bits. Even assuming a (countably) infinite number of memory bits, digital computers can never represent an uncountably infinite set exactly.

As jalf suggested, try thinking about how to exactly represent irrational numbers like pi and e.

Another thought to ponder: why should computers use base 10? The only thing special about base 10 is that we have 10 fingers, so most societies that developed mathematics use base 10.

Mr Fooz
"Even if you give your computer an infinite number of bytes of memory and you only want to be able to exactly represent all real numbers between 0 and 1" - if that was the case, you *could* represent real numbers.
Federico Ramponi
And don't stick with the uncountability subject. The set of rational numbers is indeed countable, but still you can't represent exactly all the rational numbers between 0 and 1 with a finite memory computer.
Federico Ramponi
Not true... 1, 2, 3, etc. are all real numbers... to be accurate, no scheme can exactly represent EVERY real number... but each scheme can exactly represent a different subset of all the real numbers...
Charles Bretana
@Charles: Each scheme can only represent a countable subset.
Mr Fooz
@Frederico: the real numbers between 0 and 1 still form an uncountable set, thus representing that set is "still provably impossible."
Mr Fooz
Mr Fooz, Yes, By definition, any finite set must be countable... And that's the crux of the issue. Nothing in a computer can discriminate between all the values in any infinite set, countable or not...
Charles Bretana
@Charles B: very well put point.
Axeman
hmm. I don't buy the "10 fingers" thing. I know a dude who only has three fingers (mining accident) and he don't use base-3... :)
KristoferA - Huagati.com
+4  A: 

There is absolutely nothing wrong with floating point numbers if you know what you're doing. The difficulty comes from people expecting exactness when it's made quite explicit that floating point numbers do not do that in the vast majority of cases.

People complain about rounding errors but, if you think about it, let's say that 1/3 turns out to be 0.333333333 in floating point. The error there is at most 0.0000000004 divided by 0.333333333, or about 4 parts per 3,333,333,333, which turns out to be roughly 0.12 millionths of a percent.

That is truly insignificant. And, the good thing is, whether you're talking about numbers as small as 10^-43 or as large as a googolplex (10^(10^100)), the relative error remains small.

People say there is also an issue with adding numbers like 10^100 and 10^-100 since there aren't enough bits to successfully include both numbers, but you need to understand that 10^100 plus 10^-100 is, to all intents and purposes, 10^100.

Mathematicians understand this concept just fine; it's mostly the laypeople that can't wrap their heads around it.

Your wish for a language that can provide a "viable alternative, non-floating point, decimal value" is difficult to achieve since it would soon run out of storage if you required perfect accuracy (a la the aforementioned 10^100 + 10^-100 requiring about 200 digits of storage for a single number, and 1/3 requiring an infinite number of digits).
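
Both points are easy to check (a Python sketch; the Fraction type is used only to measure the true error of the stored value):

from fractions import Fraction

print(1e100 + 1e-100 == 1e100)   # True: the tiny term is absorbed completely
third = 1.0 / 3.0
rel_err = abs(Fraction(third) - Fraction(1, 3)) / Fraction(1, 3)
print(float(rel_err))            # about 5.6e-17: a few parts in 10**17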

paxdiablo
NOT for all intents and purposes. E.g.: universe simulation. You could have an object at 10^100, and 10^100 + 10^-100; it's NOT the same.
TraumaPony
Nothing in a computer can give perfect accuracy for every possible real number. Each numeric representation scheme includes a (different) set of real numbers that it can represent exactly. For each scheme, all other numbers must be rounded to the nearest one it CAN represent.
Charles Bretana
@Trauma, when you're talking distances like 10^100, the only force applicable is gravitational and that follows an inverse square law (power diminishes relative to the distance squared). Do the calculations and you'll discover a 10^-100 difference has NO relevant effect on 10^100 distance.
paxdiablo
Unless you know more about physics than I do, which is a possibility, I guess. In which case, feel free to educate me.
paxdiablo
Even with decimal arithmetic, you can still hold very large and very small numbers with limited precision: just use scientific notation. For example, the REXX programming language uses decimal floating-point, in which the number 250 trillion can be represented as 2.5E14, and its reciprocal as 4E-15. Also, exact decimal arithmetic is very relevant to the real world. If I buy, say, a $24.95 gift card so I can buy an item online for $24.95, I will be disappointed if the transaction does not go through on account of me being 1/200000000 of a cent short.
Robert L
@Robert L, I agree that the OP's reason for mentioning decimal representations was probably for dealing with money
Jon Rodriguez
+4  A: 

I am beginning to delve into Lisp. It seems that Lisp may have a type called RATIO so that fractions can be expressed. Here are some examples...

* (/ 1 10)

1/10
* (/ 1 3)

1/3
* (describe 1/10)

1/10 is a RATIO.
* (describe 1/3)

1/3 is a RATIO.
* (* 1/3 3)

1
*

When I multiply one third times three, I get one as expected.
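
For comparison, Python's fractions module gives the same behaviour (a small sketch):

from fractions import Fraction

print(Fraction(1, 10))                   # 1/10, stored exactly
print(Fraction(1, 3) * 3)                # 1, just like the Lisp RATIO
print(Fraction(1, 3) + Fraction(1, 6))   # 1/2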

Mark Stock
Yep, this is a very common thing among Lisps. You'll find ratios in Common Lisp, Scheme, Clojure, etc.
Jyaan
+14  A: 

There is such a misunderstanding about this. Binary floats are NOT less accurate than any other representation. The difference is not the level of accuracy, but simply which numbers can be represented exactly. Binary floats can exactly represent numbers that are an integer times a power of 2 (within the limits of the mantissa and exponent), whereas decimal floats can represent decimal numbers exactly.

No representation scheme in a computer can give perfect accuracy for every possible real number. Each numeric representation scheme includes a (different) set of real numbers that it can represent exactly. For each scheme, all other numbers must be rounded - to the nearest one it CAN represent.

Irrational numbers cannot be represented exactly in ANY scheme. For rational numbers (those that can be expressed as a ratio or fraction), whether a specific representation scheme can represent a number exactly depends on the factors of the ratio's denominator.

A binary float can represent exactly any number which can be written as a fraction with a denominator that is a power of 2 (1/2, 1/4, 17/32, etc., but not 1/5, 7/10, etc.). A decimal float can represent exactly any number that can be written as a fraction with a denominator which factors into 2s and 5s (1/5, 3/10, 17/20, 4/25, etc., but not 1/3, 1/6, etc.).

Decimal floats have the SAME rounding errors when attempting to represent a number that is not in THEIR inventory (like 1/3 or 1/7, which neither base can represent exactly).

What these numbers are for, in either case, is "measuring" continuously variable things, like height, weight, distance, time intervals, density, pressure, frequencies, etc., and not "counting" things... It is only when counting things that "exact" accuracy is necessary. When measuring things, it is meaningless to expect two values to be exactly equal to each other. (How can one person's weight be EXACTLY equal to another's? Or one time interval be exactly equal to another? Quantum mechanical purists, please do not go there!)

When you are counting (including counting money, where we count pennies by using 2-place decimal values) and you need to be able to compare values exactly, then you need to use integers or decimals. When you are measuring things, on the other hand, use regular IEEE binary floating point numbers, or whatever you want, but don't expect them to be exactly the same...
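
A short Python sketch of the "different inventories" idea (Fraction is used only to recover the exact stored values):

from decimal import Decimal
from fractions import Fraction

print(Fraction(7/64) == Fraction(7, 64))            # True: 7/64 has a power-of-2 denominator, binary is exact
print(Fraction(0.1) == Fraction(1, 10))             # False: 1/10 is not in binary's inventory
print(Decimal('0.1') * 10 == Decimal('1'))          # True: 1/10 is in decimal's inventory
print(Decimal(1) / Decimal(3) * 3 == Decimal(1))    # False: 1/3 is exact in neither base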

Charles Bretana
+1: Easily the best answer here
Software Monkey
Actually, a base 10 float can exactly represent any number that base 2 can (as 2 is a factor of 10), though it may take more digits to do so. Other good choices for a base would be 6 (gets anything with factors 2 or 3) and 30 (gets 2, 3, and 5, so it can exactly represent all decimals).
Chris Dodd
@Chris, thanks for your comment. I have edited the answer to reflect that distinction.
Charles Bretana
A: 

Why do floats continue to be so prolific, even though they can't represent the real answer, and we expect computers to be accurate?

Because some equations that arise from real-world problems do not have integer solutions, e.g.:

2*x = 1

Or

x*x = 2

Or

x*x = -1

Or

e^(i*x) = -1

Therefore there is no finite representation (whether fixed-point or floating-point) that could represent x in all such cases precisely. In other words, computers just can't always be completely accurate.

Thus we have to resort to approximations, and a floating point representation is superior to a fixed point one in that context, as many answers have explained already.
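
For instance, taking x*x = 2 (a Python sketch):

import math

x = math.sqrt(2)     # the closest double to the true solution
print(x * x == 2)    # False
print(x * x)         # 2.0000000000000004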

J.F. Sebastian
How do you represent the solution to X*X = -1 in a float ? (grin)
Charles Bretana
@CharlesB, @JF didn't say it was a float, just that it wasn't an integer; however I've yet to see native language support for imaginary numbers (although plenty of classes for it).
paxdiablo
@Pax: I think both gcc (as an extension) and C99 have a native complex data type.
CesarB
A: 

Floating point is either much slower than integer or consumes many more gates than an integer alu. IEEE-754 in particular. Some processors today are willing to consume the gates so that the alu and the fpu perform the operation in a single cycle. Some even let the fpu compute faster than the alu. Do not mistake this for an assumption that the fpu is fast or cheap (relative to fixed point). Floating point is very expensive.

Both fixed and floating point have problems in computers; programmers are particularly lazy, and floating point saves you more often than fixed point, so naturally programmers are going to use it and move on with their lives.

dwelch
+5  A: 

Short form of my answer:

  1. It hasn't always been so
  2. C-implemented languages have made the decimal types that were common in mainframe languages an extra load.
  3. Users of those languages have carried that idea over as advisories in their code.

The C-language hypothesis: So many language implementations have C underlying them. C is a stripped-down language, and it really was the choice when client-server was coming along in the early 80s. Perl, Ruby, Python, and Java are all built with C innards.

When Java was an all-object language, they introduced primitives to speed up the VM, and they pretty much stuck to the C primitives. Before this, there was little reason to believe that Java's BigDecimals would be seen as that much more expensive than creating an Integer. Java's travails have resulted in software policies against Object creation in some instances, and simply an awareness among programmers of the cost of non-primitives.

Thus for the C/Java family of languages, economy has been more important than correctness. On the other side, decimals abound in the mainframe world! You'll find that mainframe languages, like COBOL and PL/1, offer defined-width numeric fields as part of the language. When you declare a number in COBOL, you have to add an extra flag to make sure it's computational and not numeric. Floating-point numbers are typically only invoked for engineering applications. I believe though that these complex implementations represent what C is "stripped down" from.

This is not to say that C-implemented languages don't have decimal types. In Perl, they are additional packages to use. Ruby follows suit. Python has a Bignum. Java has java.lang.BigDecimal. But all of them are arbitrary-precision types that compute fixed-precision numbers correctly, and I wonder at what expense compared to an optimal implementation of fixed-width numbers.

Computing digit-by-digit is more expensive in COBOL as well, which is why some shops introduced software policies to use COMP-3 where possible.

Axeman
I agree, I would just like to add that the distinction goes back to hardware, not just languages. Mainframe processors (since System/360) have support for decimal operations in hardware, while other processors (x86 on desktops etc.) have not.
J S
Well, I think that's the drawback of portable code--and thus why C chose not to carry the baggage everywhere. I would put that down to C's "stripped down" value, again. So C created a lowest common denominator approach, IMO.
Axeman
x86 does have decimal arithmetic operations (DAA and DAS), but they're far more limited than binary arithmetic.
dan04
A: 

http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems

A way to do an equality test on floats, from that article:

if (abs(x-y) < epsilon) ...

I think epsilon should be smaller than the minimum difference between your inputs; otherwise genuinely distinct values will compare as equal.

Also note the article's reminder: "...where epsilon is sufficiently small and tailored to the application, such as 1.0E-13 (see machine epsilon). The wisdom of doing this varies greatly. It is often better to organize the code in such a way that such tests are unnecessary."
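
In practice a relative tolerance is usually safer than a single fixed epsilon; for example (a Python sketch using math.isclose):

import math

x = 0.1 + 0.2
y = 0.3
print(x == y)                             # False: exact comparison fails
print(abs(x - y) < 1e-9)                  # True, but a fixed epsilon misbehaves for very large or tiny values
print(math.isclose(x, y, rel_tol=1e-9))   # True: the tolerance scales with the magnitude of the operands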

+2  A: 

Interestingly enough, binary is really the most efficient base to store floating point values in. Benford's Law suggests that lower bases (base 2 for example) in a floating point system are more accurate on average than higher bases (base 10 or 16).

Eclipse
A: 

Just use ASCII arithmetic. As long as you have infinite RAM and time you'll get your answer...

+4  A: 

however, I can think of only a few cases where they are actually the right data type to be using. If you sit back and think about every time you used a floating point value, how many times did you say: well, some error would be OK, as long as the result was a few microseconds faster?

Apart from the fact that precise representation of all numbers is impossible: limited-precision floating point numbers are used as the best tool all the time in natural science and engineering calculations (mainly the numerical solving of differential equations). Using integers is actually a rare exception in these fields.
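
To make "numerical solving of differential equations" concrete, here is a toy Euler-method sketch (illustrative only; dy/dt = -y with y(0) = 1):

y, t, dt = 1.0, 0.0, 0.001
while t < 1.0:
    y += dt * (-y)    # one Euler step
    t += dt
print(y)              # roughly 0.367, close to exp(-1) = 0.3679: plenty accurate for engineering work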

Your view stems from the overwhelming primacy of discrete mathematics and discrete entities in computer science (cryptography, combinatorics) and in the kind of systems most developers build (i.e. work flow, accounting and inventory apps of one kind or another).

But that is a later development. Computers used to be primarily intended for engineering calculations. If you look at the original von Neumann papers that the architecture of all today's computers is based on, you'll see that one of the first things they discuss (and which influences the whole architecture) is how many digits of precision are needed for differential equations. He concludes that 27 binary digits are necessary - round that up to the nearest power of two and you get today's ubiquitous 32-bit float.

Basically, the better support for limited-precision floating point types, compared to the arbitrary-precision decimal types that are more useful for the kind of applications most developers work on today, is a historical holdover from the times when that was not the case.

Another field where speed matters far more than limited precision is 3D games, by the way. Of course, these are at heart simulations and thus somewhat related to scientific computing.

It really makes me think because Jeff was talking about NP completeness, and how heuristics give an answer that is kind of right. And well, computers shouldn't do that. They should give you the answer that is correct.

You'd rather wait a billion years for a perfect answer than get one that's guaranteed to be within 5% of the optimum in under a second? The world doesn't care what people think it "should" be like.

Michael Borgwardt
A: 

Fixed decimal has been an alternative to floating point numbers for years. It's essentially an integer with some number of decimal digits to the right of the decimal point. It used to be popular with COBOL and PL/I for financial calculations because there is no roundoff or truncation error. Double precision floating point operations are now precise enough for financial applications, and hardware floating point operations have made the speed more than acceptable.

That said, there are still cases with floating point operations where you need to beware of roundoff and truncation errors.
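
A rough Python analogue of such a fixed-decimal field, using decimal.Decimal quantized to two places (the price and tax rate here are made up for illustration):

from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal('0.01')
total = (Decimal('19.99') * Decimal('1.0825')).quantize(CENT, rounding=ROUND_HALF_UP)
print(total)    # 21.64: computed exactly, then rounded once to the fixed two places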

xpda
A: 

Why are floating point values so prolific? So, title says it all.

Actually, it doesn't say it all. But I'll fill in the other half of the question for you.

Why use floating-point instead of exact arithmetic?

Efficiency, mostly. For example, Guido van Rossum's explanation of why Python prefers float over Fraction:

Numbers are one of the places where I strayed most from ABC. ABC had two types of numbers at run time; exact numbers which were represented as arbitrary precision rational numbers and approximate numbers which were represented as binary floating point with extended exponent range. The rational numbers didn’t pan out in my view. (Anecdote: I tried to compute my taxes once using ABC. The program, which seemed fairly straightforward, was taking way too long to compute a few simple numbers. Upon investigation it turned out that it was doing arithmetic on numbers with thousands of digits of precision, which were to be rounded to guilders and cents for printing.) For Python I therefore chose a more traditional model with machine integers and machine binary floating point. In Python's implementation, these numbers are simply represented by the C datatypes of long and double respectively.

Why use floating-point over fixed-point?

Because, as jalf mentioned, "You need to be able to represent values of very different magnitudes."

Why use base-2 instead of base-10?

According to your edit, this is your real question.

The important thing to remember is that base ten has no inherent advantage over other bases. It's just an arbitrary convention that humans came up with because we have ten fingers. A lot of people don't realize this, and can't think outside base 10, and ask "Why does 0.1 get displayed as 0.10000000000000001?" on here once a week.

There are reasons to prefer other bases. For example, there are people who advocate base twelve based mostly on the grounds that it's better at representing fractions like 1/3. Computers use base 2 because it's easy to model in electronics: The digits can be represented by the two states "on" and "off", and arithmetic is easy to model in terms of logic gates (a half-adder is just an XOR gate and an AND gate, and a 1-bit multiplier is simply an AND gate).
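
For instance, the half-adder mentioned above really is just those two gates (a toy Python sketch on single bits):

def half_adder(a, b):
    return a ^ b, a & b      # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', half_adder(a, b))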

Given the fact that all numbers ultimately have to be represented in binary on a computer, using base-2 floating-point has several advantages. The obvious one is that it's simpler to implement in hardware.

Another is that binary arithmetic is more accurate for the same number of bits. There are two reasons for this:

  • Base-2 is unique in allowing the use of a hidden bit, which adds 1 bit of precision to a number.
  • Base-2 allows all 2^n possible values of the significand to represent numbers. Base 10 "wastes" some values. For example, 10 bits can represent 1024 values if used as a binary type, but only 1000 values if used as a 3-digit decimal value.

A 50-bit significand gives 51 bits of precision in binary, but only 15 digits (≈49.8 bits) in decimal.
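
The bookkeeping behind those numbers, as a quick Python check:

import math

print(2 ** 10, 10 ** 3)      # 1024 binary significands vs 1000 three-digit decimal ones
print(15 * math.log2(10))    # ~49.8: bits of information in 15 decimal digits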

You may have bought into the idea that "decimal is more accurate", but this is true only for a specific case: when you need an EXACT representation of a decimal fraction like 0.01, which in practice usually means financial applications.

In applications where the numbers represent physical measurements, which are never measured exactly, base-2 is preferable because of its higher precision, and more importantly, because a lot of hardware, including PCs, has support for binary floating-point but not decimal arithmetic, which means decimal arithmetic is an order of magnitude slower, which is often an important consideration.

dan04