views:

321

answers:

7

All the methods in System.Math take double as parameters and return double. The constants are also of type double. I checked out MathNet.Numerics, and the same seems to be the case there.

Why is this? Especially for constants. Isn't decimal supposed to be more exact? Wouldn't that often be kind of useful when doing calculations?

+13  A: 

This is a classic speed-versus-accuracy trade-off.

However, keep in mind that for pi, for example, the most digits you will ever need is 41.

The largest number of digits of pi that you will ever need is 41. To compute the circumference of the universe with an error less than the diameter of a proton, you need 41 digits of pi †. It seems safe to conclude that 41 digits is sufficient accuracy in pi for any circle measurement problem you're likely to encounter. Thus, in the over one trillion digits of pi computed in 2002, all digits beyond the 41st have no practical value.

In addition, decimal and double have slightly different internal storage structures. Decimals are designed to store base-10 data, whereas doubles (and floats) are made to hold binary data. On a binary machine (like every computer in existence) a double will have fewer wasted bits when storing any number within its range.
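To make that concrete, here is a small C# sketch that peeks at how each type stores the value 1.5 (illustrative only; the grouping in the comments is for readability, and the exact console output may vary by runtime):

    // decimal stores 1.5 as the base-10 integer 15 with a scale of 1, i.e. 15 / 10^1.
    int[] bits = decimal.GetBits(1.5m);
    int scale = (bits[3] >> 16) & 0xFF;
    Console.WriteLine($"lo={bits[0]}, scale={scale}");       // lo=15, scale=1

    // double stores 1.5 as a base-2 mantissa and exponent: binary 1.1 x 2^0.
    long raw = BitConverter.DoubleToInt64Bits(1.5);
    Console.WriteLine(Convert.ToString(raw, 2).PadLeft(64, '0'));
    // 0 01111111111 1000...0 : sign bit, biased exponent (2^0), fraction (.1 in binary)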

Also consider:

System.Double      8 bytes    Approximately ±5.0e-324 to ±1.7e308 with 15 or 16 significant figures
System.Decimal    16 bytes    Approximately ±1.0e-28 to ±7.9e28 with 28 or 29 significant figures

As you can see, decimal has a smaller range, but a higher precision.
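For example (a quick C# sketch; the number of digits printed can vary slightly by runtime):

    Console.WriteLine(1e300);        // fine for double; far beyond decimal's range
    // decimal d = 1e300m;           // would not even compile: decimal tops out near 7.9e28
    Console.WriteLine(1.0 / 3.0);    // roughly 15-17 significant digits
    Console.WriteLine(1.0m / 3.0m);  // 0.3333333333333333333333333333 (28 digits)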

John Gietzen
nice quote! Where did you find that?
Jrud
http://web.sbu.edu/math/PiDay.html
John Gietzen
Very nice indeed!
Svish
Amusing analysis on digits of pi, but just to be contrary: What if I want to measure, not the number of protons it would take to make a circle around the universe, but the number of quarks it would take to fill the volume of the universe? And the universe is expanding, so how long until we would need 42 digits?
Jay
And on the mildly serious side, I think far fewer than 41 digits have "practical value". I wonder what the most digits of pi are that were ever needed for any real-world application?
Jay
@Jay: Probably 15 or 16, as in the case of a double. ;)
John Gietzen
+3  A: 

Decimal is more precise but has a smaller range. You would generally use Double for physics and mathematical calculations, but you would use Decimal for financial and monetary calculations.
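A quick C# illustration of why decimal suits money:

    double dSum = 0.0;
    for (int i = 0; i < 10; i++) dSum += 0.1;    // 0.1 has no exact binary representation
    Console.WriteLine(dSum == 1.0);              // False (dSum is 0.9999999999999999)

    decimal mSum = 0m;
    for (int i = 0; i < 10; i++) mSum += 0.1m;   // 0.1 is exact in base 10
    Console.WriteLine(mSum == 1.0m);             // True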

See the following articles on MSDN for details.

Double http://msdn.microsoft.com/en-us/library/678hzkk9.aspx

Decimal http://msdn.microsoft.com/en-us/library/364x0z75.aspx

Robin Day
-1 because decimal is no more precise than a double. It depends on what value you are trying to represent. Try to represent the binary rational number 0.00101101 x 2^(-00011011001): you will find that a double can represent it with 100% accuracy, while a decimal cannot. (See the sketch at the end of these comments.)
Charles Bretana
I changed the word from "exact" as used in the question to "precise" as used by the MSDN articles. Precision is the number of significant digits with which the number is represented. Decimal has a greater precision than Double....
Robin Day
I took off my downvote, but I would still make the (I'll admit picky) point that, regardless of the precision (number of digits), the exactness of the computer representation of a value depends more on which real value you are trying to represent than on how many digits you get to represent it with. But +1 for making the distinction between physics/math and financial/monetary applications.
Charles Bretana
Charles, you are indeed making the very common error of confusing *precision* with *accuracy*. A figure can be very precise without being accurate: I am 1.4293859838 metres tall -- extremely precise, not at all accurate. Or, I am 1.78 metres tall -- not at all precise, but rather more accurate. Neither figure is exact.
Eric Lippert
@Eric, Your statement above cannot be evaluated without knowing what your real height is... (Are you a short person?) But I do understand the difference. If I said the sun was 1.5782371876433124165413 inches away, that would be extremely precise, but not very accurate. But (to use your example), 1.78 could be equally precise if your height were actually 1.7800000000000000000000000. Precision 'allows' us to get more accuracy, because the higher the precision, the smaller the spaces are between adjacent 'representable' values. 'Accuracy' brings in the distance between the exact true value and the represented value.
Charles Bretana
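Charles's binary-rational example above can be checked directly. A minimal C# sketch (0.00101101 x 2^(-00011011001) works out to 45 x 2^-225; Math.ScaleB assumes a reasonably recent .NET, and on older frameworks multiplying by Math.Pow(2, -225) would serve):

    double d = Math.ScaleB(45.0, -225);  // exactly 45 * 2^-225: mantissa and exponent both fit
    Console.WriteLine(d);                // ~8.35E-67, held with 100% accuracy
    decimal m = (decimal)d;              // decimal's range bottoms out near 1.0e-28,
    Console.WriteLine(m);                // so the conversion collapses to 0: decimal can't hold it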
A: 

If I were to hazard a guess, I'd say those functions leverage low-level math functionality (perhaps in C) that does not use decimals internally, and so returning a decimal would require a cast from double to decimal anyway. Besides, the purpose of the decimal value type is to ensure accuracy; these functions do not and cannot return 100% accurate results without infinite precision (e.g., irrational numbers).
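To illustrate that last point: even a hand-rolled decimal square root (a hypothetical SqrtDecimal helper, sketched below with Newton's method) can only approximate an irrational result, just in base 10 instead of base 2:

    // Hypothetical decimal square root -- NOT part of System.Math.
    static decimal SqrtDecimal(decimal x)
    {
        if (x < 0) throw new ArgumentOutOfRangeException(nameof(x));
        if (x == 0) return 0;
        decimal guess = (decimal)Math.Sqrt((double)x);  // seed with the double answer
        for (int i = 0; i < 10; i++)
            guess = (guess + x / guess) / 2;            // Newton's iteration
        return guess;
    }

    // SqrtDecimal(2m) ≈ 1.414213562373095048801688724 -- more digits than double, still not exact.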

Dan Tao
+3  A: 

No - decimals are no more "exact" than doubles or, for that matter, any other type. The concept of "exactness" (when speaking about numerical representations in a computer) is what is wrong. Any type is absolutely 100% exact at representing some numbers. Unsigned bytes are 100% exact at representing the whole numbers from 0 to 255, but they're no good for fractions, for negatives, or for integers outside the range.

Decimals are 100% exact at representing a certain set of base-10 values. Doubles (since they store their value using binary IEEE exponential representation) are exact at representing a set of binary numbers. Neither is any more exact than the other in general; they are simply for different purposes.

To elaborate a bit further, since I seem to not be clear enough for some readers...

If you take every number which is representable as a decimal and mark every one of them on a number line, between every adjacent pair of them there is an additional infinity of real numbers which are not representable as a decimal. The exact same statement can be made about the numbers which can be represented as a double. If you marked every decimal on the number line in blue and every double in red, then, except for the integers, there would be very few places where the same value was marked in both colors. In general, for 99.99999% of the marks (please don't nitpick my percentage), the blue set (the decimals) is a completely different set of numbers from the red set (the doubles).

This is because, by our very definition, the blue set is a base-10 mantissa/exponent representation, and a double is a base-2 mantissa/exponent representation. Any value represented as a base-2 mantissa and exponent (1.00110101001 x 2^(-11101001101001)) means: take the mantissa value (1.00110101001) and multiply it by 2 raised to the power of the exponent (when the exponent is negative, this is equivalent to dividing by 2 to the power of the absolute value of the exponent). This means that where the exponent is negative (or where any portion of the mantissa is a fractional binary), the number cannot be represented as a decimal mantissa and exponent, and vice versa.

For any arbitrary real number, that falls randomly on the real number line, it will either be closer to one of the blue decimals, or to one of the red doubles.
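A small C# sketch of the "two sets" idea (the "G17" format asks a double for enough digits to reveal its true stored value):

    Console.WriteLine(0.5.ToString("G17"));  // 0.5 -- this is 2^-1, in both the blue and red sets
    Console.WriteLine(0.1.ToString("G17"));  // 0.10000000000000001 -- the nearest double to 0.1
    Console.WriteLine(0.1m);                 // 0.1 -- exact as a decimal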

Charles Bretana
+1, Always use the data type that represents what you need best...
Robin Day
Bah, -1 for pedantry. Decimal data types can represent a DECIMAL fraction precisely, whereas floating point data types may or may not be able to. As our users almost always use decimal fractions, this is what matters. If someday you are called on to write software for a group of aliens who routinely use binary fractions, then float would be just as "exact". For measurements, it is arguably a moot point, as there is no such thing as an "exact measurement" anyway. Though even there, if the user enters "7.3", they presumably expect to get "7.3" back, and not "7.3000000001".
Jay
This is not pedantry, Jay. This is less than the minimum you *need* to understand in order to do floating point math accurately and efficiently. The purpose of doubles is to enable fast calculation of physical quantities where the representation error is far less than the measurement error; understanding that is crucial.
Eric Lippert
-1. Charles, you are saying nonsense. You are trying to take the concept of "numbers" or "values" (you seem to use them interchangeably) and class them into at least 2 groups: the "base 10 values" and the "binary numbers". Total nonsense.
Bruno Reis
@Jay, in math, there are indeed exact numbers. In the real world, we can only 'measure' real things to a certain precision. As soon as anyone begins to talk about accuracy, however, the actual value of whatever it is you are measuring comes into play. And 'real' users DO NOT use decimal values; they measure real things. Real things are not decimal or binary; they have a real value. Whatever that real value is, it is a toss-up whether the closest binary representation or the closest decimal representation will be closer to the actual real value.
Charles Bretana
@Bruno. it's not nonsense at all. If you take every number which is representable as a decimal, between every adjacent pair of them there are an infinity of real numbers which are not representable as a decimal. Same is true for the doubles. But these are completely different sets of numbers. For any arbitrary real number, that falls randomly on the real number line, it will either be closer to one of the decimals, or to one of the doubles.
Charles Bretana
@Charles: I didn't say that there are no exact numbers "in math". I said that there are no "exact measurements". If you ask me, "How many thumbtacks are in your top desk drawer?" I can give you an exact answer, because this is counting and not measurement. But if you ask me how much those thumbtacks weigh, I can only answer to some level of precision. A better scale could give a more precise answer. If I say they weigh, say, 232.7 grams, a better scale may be able to say it is closer to 232.749 grams, etc. This is clearly not true in counting: (continued ...)
Jay
A "better counter" would not be able to say, Aha, there are really 14.2 thumbtacks. The answer 14 may be wrong, maybe I mis-counted and there are really 15. But an integer is as precise as the answer can possibly be.
Jay
@Charles: "For any arbitrary real number, that falls randomly on the real number line, it will either be closer to one of the blue decimals, or to one of the red doubles." Umm, no actually. Any number that can be exactly expressed as a binary fraction can also be exactly expressed as a decimal fraction, but the reverse is not true. Express each as a product of primes: 2=2 (easy enough), 10=2*5. Take any binary fraction, express as x/2^n, and you can convert to a decimal fraction by multiplying by 5^n/5^n. But there is no way to do this in reverse if the numerator of the decimal fraction ...
Jay
(continued) is not divisible by 5^n. So for example binary .1 = decimal .5, binary .01 = decimal .25, binary .001 = decimal .125, etc. But decimal .1 ... right off the bat there's no exact binary equivalent. Your statement would be true if you were comparing two bases that were relatively prime, like base 10 and base 3.
Jay
But in any case, that's not the point. Yes, it's true that the concept of a "real number" exists independent of the base in which it is expressed, just as it is independent of the notation, e.g. Arabic versus Roman numerals. But in practice, people use decimal. They obtain a quantity in the real world that they express as "7.43". They enter this into the computer. They expect it to come back out as "7.43", not as "7.429999996". If the number truly is an exact number -- like a monetary amount or the result of a calculation on exact numbers, then converting it to binary gives an answer that ...
Jay
... is clearly and unambiguously WRONG. Yes, mathematically the number could be expressed in binary or any other base as an exact value if we had an infinite number of digits. But we don't have an infinite number of digits, so the answer is simply wrong. You can explain to the user that it is accurate to some level of precision, but so what? This doesn't help in the real world.
Jay
@Jay, yes, on this point we agree. There is a fundamental difference between 'counting' and 'measuring'. But I didn't say you said that (this is getting weirdly circular). I am pointing out that your use of the phrase "exact measurement" is misdirecting. My point is that any representation of a value in a computer, be it binary or decimal, purports to be an 'exact number', and, as you say, when measuring, no measured value is exact, i.e., the chances that the 'real' (unmeasured) value is an exact match to either the binary or the decimal representation are zero.
Charles Bretana
@Jay, You are basing your judgment that it is WRONG on the user's expectations? So if I expect it to be 7.44 (my expectations are totally in my head, not anywhere else), then the decimal-represented and returned value of 7.43 would be wrong?? What I am trying to do is make the point that it is your EXPECTATION (that the decimal representation is somehow better) that is WRONG, not the computer.
Charles Bretana
@Jay, the point is that the REAL number (not the WRONG decimal one that a user types in, nor the WRONG binary one the computer "converts" it to) - that real number cannot be represented EXACTLY in a computer by either a decimal or binary representation, so it is your EXPECTATION that it can which is WRONG. It CAN be represented to whatever degree of precision is necessary, and that is sufficient, whether it looks like 4.73 or 4.729999999999999999999.
Charles Bretana
RE "chances that the real value [of a measurement] is an exact match to either binary or decimal is zero": Absolutely. We agree on something! Break out the champagne!
Jay
RE expectations: There are reasonable expectations and there are unreasonable expectations. (a) A user says that when he types in a customer name of "Fred" he expects the computer to know who he means and automatically retrieve the last name, address, etc. That is unreasonable. (b) The user says that when he types in the customer name of "Fred", he expects that if he comes back tomorrow and retrieves the same record, and no one else has edited the record, that it will still show "Fred" and the computer will not just decide to change it to "Ferd" or "Martha". That is reasonable.
Jay
(continued) Likewise, if the user types in that the amount of a sale was $7.43, it is completely reasonable to expect that if he retrieves that record tomorrow, it will still show $7.43, and not $7.42999996. Even if we are talking about a value that cannot be exact, like a measurement of the weight of a chemical sample, I think it is completely reasonable for a user to say that if he types in that the weight is 7.43 grams, that when he retrieves that record tomorrow it will say 7.43 grams, and not 7.42999996 grams.
Jay
(continued ... these comment blocks are too short!) There are scientific conventions for how you express the number of significant digits in a measurement. Conversions to binary fractions and back destroy these conventions. If I type in 7.43 grams, that means that the measurement is accurate to 3 significant figures. If the computer regurgitates this as 7.4299996 grams, that implies 8 significant figures. Even if the user knows that these transformations are happening and where it comes from, how does he know if the actual number is 7.4299996, 7.43, or some other value? We could ...
Jay
(continued) Because scientists and engineers are used to working in decimal, they express the number of significant figures in decimal, not binary. If they worked in binary, this wouldn't be a problem. But they don't. The point is that using floats creates problems that don't exist outside the computer. Maybe the performance advantage or some other reason makes it worth putting up with the problems, but we can't deny that the problems exist. In a pure theoretical world where we had an infinite number of digits, the base wouldn't matter. But we don't have an infinite number of digits.
Jay
Hey, do we get any bonus points for longest run of back-and-forth comments?
Jay
@Charles: Hey, I'll give you some up votes for a fun argument, anyway.
Jay
@Jay, Agree with all you said, but would point out that my original intent was to make exactly that distinction: that when counting things (and arguably $4.73, or 473 pennies, is counting, not measuring) it is reasonable to expect that the computer will not change the value, but that when measuring things like 7.7332 lbs or 6,340.0012 meters, it is NOT reasonable to expect (nor necessary) that the computer will retain the EXACT value.
Charles Bretana
RE: decimal vs. binary. So, again making the distinction between counting and measuring, any scientist who is measuring and has an issue with the computer representing 4.73 as 4.73000000000000000000001 or 4.729999999999999999999999999 has an issue with his/her understanding of what he/she is doing. This is true regardless of the fact that they think in decimal or enter the values in decimal.
Charles Bretana
It is precisely because all values in a 'measuring' scenario are only accurate to some level of +/- precision that it is important that the scientist understand this. How these uncertainties propagate, and are magnified by arithmetic operations, is itself an independent discipline he/she should be aware of. (As I would guess you know from your posts above.) ... And I thank you for the lively interchange as well!
Charles Bretana
A: 

Our friend Tony the Pony has written some comments about this here.

SwDevMan81
A: 

Neither decimal nor float nor double is good enough if you require something to be precise. Furthermore, decimal is so expensive and overused out there that it is becoming a regular joke.

If you work in fractions and require ultimate precision, use fractions (see the sketch below). It's the same old rule: convert once, and only when necessary. Your rounding rules will vary per app, domain, and so on, though sure, you can find an odd example or two where decimal is suitable. But again, if you want fractions and ultimate precision, the answer is not to use anything but fractions. Consider that you might want arbitrary precision as well.
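A minimal C# sketch of what that could look like (Fraction is a hypothetical type, not a framework class; sign normalization and zero-denominator checks are omitted):

    using System.Numerics;

    // Exact rational number: numerator/denominator, reduced to lowest terms.
    readonly struct Fraction
    {
        public readonly BigInteger Num, Den;

        public Fraction(BigInteger num, BigInteger den)
        {
            BigInteger g = BigInteger.GreatestCommonDivisor(num, den);
            Num = num / g;
            Den = den / g;
        }

        public static Fraction operator +(Fraction a, Fraction b)
            => new Fraction(a.Num * b.Den + b.Num * a.Den, a.Den * b.Den);

        public static Fraction operator *(Fraction a, Fraction b)
            => new Fraction(a.Num * b.Num, a.Den * b.Den);

        public override string ToString() => Num + "/" + Den;
    }

    // new Fraction(1, 3) + new Fraction(1, 6) prints "1/2" -- no rounding anywhere.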

The actual problem with the CLR in general is that it is oddly difficult, and plain broken, to implement a library that deals with numerics in a generic fashion, largely due to bad primitive design and the shortcomings of the most popular compiler for the platform. It's almost the same as the Java fiasco.

double just turns out to be the best compromise covering most domains, and it works well, despite the fact that the MS JIT is still incapable of utilising CPU technology that is about 15 years old now.

[Peace to users of MSDN slowdown compilers.]

rama-jka toti
A: 

Double is a built-in type. It is supported by the FPU/SSE core (formerly known as the "math coprocessor"), which is why it is blazingly fast, especially at multiplication and scientific functions.

Decimal is actually a complex structure, consisting of several integers.
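For a sense of the gap, a rough C# micro-benchmark sketch (no warm-up or statistical care, so treat the numbers as illustrative; on typical hardware the decimal loop lands around an order of magnitude slower):

    var sw = System.Diagnostics.Stopwatch.StartNew();
    double d = 1.0;
    for (int i = 0; i < 10_000_000; i++) d *= 1.0000001;   // hardware FPU/SSE multiply
    Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms");

    sw.Restart();
    decimal m = 1.0m;
    for (int i = 0; i < 10_000_000; i++) m *= 1.0000001m;  // software routine over a 128-bit struct
    Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");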

yk4ever