I'm probably completely wrong, and I don't really know much about this, but I have a question about decimal number data types in programming languages. I understand that floats aren't completely precise because they're stored in binary with an exponent or something, but I've always wondered why decimal number data types don't just store the number as if there were no decimal point, do the calculations as if there were no decimal point, and then add it back in afterwards. Like in this situation:

2.159 * 3.507 --> 2159 * 3507 = 7571613
  ^^^     ^^^
  123     456

6 decimals in total... 7571613 -> 7.571613
                        ^^^^^^
                        654321

so 2.159 * 3.507 = 7.571613

Why can't it work like that, instead of using fractions? Sorry, I don't know anything about this at all; the answer is probably obvious. Thanks for explaining.
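For what it's worth, the scheme you describe can be written down in a few lines of Python (the function name and the string handling are just for illustration). It's also essentially how arbitrary-precision decimal types such as Python's decimal module behave: they keep an integer coefficient plus a count of decimal places.

    def multiply_as_integers(a, b):
        """Multiply two decimal strings as described above: drop the points,
        multiply as plain integers, then re-insert the point."""
        # Total number of digits after the two decimal points.
        places = len(a.partition('.')[2]) + len(b.partition('.')[2])
        # Multiply as if there were no decimal points.
        product = int(a.replace('.', '')) * int(b.replace('.', ''))
        # Re-insert the point 'places' digits from the right.
        digits = str(product).rjust(places + 1, '0')
        if places == 0:
            return digits
        return digits[:-places] + '.' + digits[-places:]

    print(multiply_as_integers('2.159', '3.507'))   # prints 7.571613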

+3  A: 

This is called "fixed-point arithmetic". People do it all the time.

See http://gameprogrammer.com/4-fixed.html
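To give a concrete flavour of what that article covers, here is a minimal base-2 fixed-point sketch (the 16.16 layout and the helper names are just assumptions for illustration). Every value is an integer scaled by 2^16, so a multiply produces a double-scaled result that has to be shifted back down:

    FRAC_BITS = 16
    SCALE = 1 << FRAC_BITS             # values are stored as round(x * 2**16)

    def to_fixed(x):
        return int(round(x * SCALE))

    def from_fixed(f):
        return f / float(SCALE)

    def fixed_mul(a, b):
        # a*b is scaled by 2**32, so shift right once to get back to 2**16 scaling.
        return (a * b) >> FRAC_BITS

    p = fixed_mul(to_fixed(2.159), to_fixed(3.507))
    print(from_fixed(p))               # roughly 7.5716 (only 16 fractional bits)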

S.Lott
+1 for S. Lott. Keep in mind the trade-off: the exact calculations you get with fixed-point arithmetic come at the cost of a smaller range of values you can represent in the same number of bits. S. Lott's gameprogrammer.com link brings back memories of the 80386 and earlier processors (and the 80487SX)... where a hardware floating-point processor was an expensive option most users didn't have. Now, by contrast, the floating-point processor on your graphics card may be faster than your CPU: http://boinc.berkeley.edu/cuda.php
Eric J.
+1  A: 

There are quite a few implementations of fixed-point arithmetic. However, we often run out of decimal places very, very quickly with fixed-point storage. It's ideal for monetary transactions, where we know that we aren't going to store/care about any irrational numbers.

Additionally, for a lot of other things, fixed-point arithmetic just isn't worth the overhead. Floating point is just a lot faster.
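In the monetary case that usually just means keeping amounts as integer counts of the smallest unit. A tiny sketch (the price and the 8.25% tax rate are made up for illustration):

    # Keep money as integer cents so addition and comparison are exact.
    price_cents = 1999                                # $19.99
    tax_cents = (price_cents * 825 + 5000) // 10000   # 8.25% tax, rounded to the nearest cent
    total_cents = price_cents + tax_cents
    print("$%d.%02d" % (total_cents // 100, total_cents % 100))   # $21.64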

Things to read:

Eric
+6  A: 

That's exactly what they do. A floating-point number is stored in exponent form. Let's assume that we're working on a decimal-based computer so I don't have to change all these numbers to binary.

You're multiplying 2.159 * 3.507, but in actuality 2.159 is stored as 2159 * 10^-3 and 3.507 is stored as 3507 * 10^-3. Since we're working on a decimal-based system, the 10 is assumed, so we only really have to store -3 without the 10, like this: 2159,-3 or 3507,-3. The -3 is the location of the "floating point": as the point moves left the floating point decreases (.3507 is stored as 3507,-4) and as the point moves right the floating point increases (35.07 is stored as 3507,-2).

When you multiply the two together, the decimal number (or the binary number on a binary computer) is the only thing that gets multiplied. The floating point gets added! So behind the scenes what happens is:

2.159 * 3.507
2159,-3 * 3507,-3
2159 * 3507,-3 + -3
7571613,-6

7571613,-6 is just 7571613 * 10^-6 (remember we can assume the 10 because we're working on a decimal computer) which is the same as 7.571613.
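A toy model of that decimal machine fits in a couple of lines (this only mirrors the steps above; it is not how real hardware stores floats):

    def mul(a, b):
        """Multiply two (coefficient, exponent) pairs; base 10 is assumed."""
        (ca, ea), (cb, eb) = a, b
        return (ca * cb, ea + eb)       # coefficients multiply, exponents add

    coeff, exp = mul((2159, -3), (3507, -3))
    print(coeff, exp)                    # 7571613 -6
    print(coeff * 10 ** exp)             # 7.571613, give or take float rounding in the print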

Of course, the floating point doesn't have to be -3, it could be anything that fits into the storage:

21590 * .3507
2159,1 * 3507,-4
2159 * 3507,1 + -4
7571613,-3
7571.613

And of course, most computers don't store things in decimal, so the actual numbers would be all in binary, and the floating point would be something like 2^-9 -> -9 rather than 10^-3 -> -3. But you get the idea.
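You can peek at the binary version from Python: math.frexp splits a float into its binary mantissa and exponent (x == m * 2**e with 0.5 <= m < 1), and redoing the multiplication that way gives the same product:

    import math

    m1, e1 = math.frexp(2.159)     # 2.159 == m1 * 2**e1
    m2, e2 = math.frexp(3.507)
    print(m1, e1)                   # something like 0.53975 2

    # Multiply the mantissas, add the exponents; the hardware then renormalizes
    # and rounds the mantissa back down to its fixed width.
    print((m1 * m2) * 2 ** (e1 + e2))   # ~7.571613, matching 2.159 * 3.507 up to rounding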

Imagist
Then how come they aren't always precise? Like in, say, Python 2.5.1, the float 1.1 gets turned into 1.1000000000000001.
Mk12
@Mk12 A base can only represent a fraction exactly when every prime factor of the (reduced) denominator is also a factor of the base. For example, `1/8` can be represented exactly in base `10` because `8` factors to `2*2*2` and `10` factors to `5*2` (`2`, the only prime in the denominator, divides the base). However, `1/3` can't be represented because `3` is not a factor of `10`. If you had infinite storage space you could store it by repeating forever, but computers don't have infinite storage space.
Imagist
@Mk12 (continued) Since the computer stores things in base `2`, only numbers that contain only factors of `2` in the denominator can be represented. In the case of 1.1, it's actually `11/10`. `10` factors to `5*2`, and the `5` part of that can't be represented in base `2` without a repeating number that goes on infinitely. The computer repeats the number for a little while, but eventually it has to stop. Wherever it stops, it rounds the last digit. It's basically like assuming .6 repeating is the same as .66667 because you can only store five digits.
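On a reasonably recent Python you can see exactly where it stops and rounds by asking for the value that actually got stored (digits abridged in the comments):

    from decimal import Decimal
    from fractions import Fraction

    # The double closest to 1.1: the repeating binary fraction, cut off and rounded.
    print(Decimal(1.1))      # 1.100000000000000088817841970012... (the exact stored value)
    print(Fraction(1.1))     # the same value as an exact ratio with a power-of-two denominator
    print(Decimal(0.125))    # 0.125 exactly, because 1/8 has only 2s in its denominator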
Imagist