views: 1409
answers: 7
So I've decided to try to do my physics homework by writing some Python scripts to solve problems for me. One problem I'm running into is that significant figures don't always seem to come out properly. For example, this handles significant figures properly:

>>> from decimal import Decimal
>>> Decimal('1.0') + Decimal('2.0')
Decimal("3.0")

But this doesn't:

>>> Decimal('1.00') / Decimal('3.00')
Decimal("0.3333333333333333333333333333")

So two questions:

  1. Am I right that this isn't the expected number of significant digits, or do I need to brush up on significant-digit math?
  2. Is there any way to do this without having to set the decimal precision manually? Granted, I'm sure I could use numpy for this, but out of curiosity I'd like to know whether the decimal module itself can do it.
A: 

Decimal defaults to a context precision of 28 significant digits.
The only way to limit the number of digits it returns is by altering the precision.
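
For example, a minimal sketch at the prompt (the prec value is just illustrative, and the last line restores the default so later calculations aren't affected):

>>> import decimal
>>> from decimal import Decimal
>>> decimal.getcontext().prec = 2     # limit results to 2 significant digits
>>> Decimal('1.00') / Decimal('3.00')
Decimal("0.33")
>>> decimal.getcontext().prec = 28    # restore the default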

Dre
That's not necessarily always true. For example, >>> Decimal('1.0') * Decimal('1.0') yields Decimal("1.00"). Are you perhaps saying this in the context of division?
Jason Baker
+2  A: 

Decimals won't throw away decimal places like that. If you really want to limit the precision to 2 decimal places, then try

decimal.getcontext().prec = 2

EDIT: You can alternatively call quantize() every time you multiply or divide (addition and subtraction will preserve the 2 decimal places).
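
A rough sketch of that approach (TWOPLACES is just an illustrative name for the quantum):

>>> from decimal import Decimal
>>> TWOPLACES = Decimal('0.01')
>>> (Decimal('1.00') / Decimal('3.00')).quantize(TWOPLACES)
Decimal("0.33")
>>> (Decimal('1.50') * Decimal('2.50')).quantize(TWOPLACES)
Decimal("3.75")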

Hmmm... so there's no way to have it do that without requiring me to set the precision manually?
Jason Baker
A: 

If I understand Decimal correctly, the "precision" is the number of digits after the decimal point in decimal notation.

You seem to want something else: the number of significant digits. That is one more than the number of digits after the decimal point in scientific notation.
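
For instance, %e formatting makes that count easy to see:

>>> '%.2e' % 0.003261   # precision 2 -> 2 digits after the point -> 3 significant digits
'3.26e-03'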

I would be interested in learning about a Python module that does significant-digits-aware floating-point computations.

ddaa
+1  A: 

What's wrong with floating point?

>>> "%8.2e"%  ( 1.0/3.0 )
'3.33e-01'

It was designed for scientific-style calculations with a limited number of significant digits.

S.Lott
The problem is that I would still be setting the number of significant digits manually.
Jason Baker
I sure don't see how setting the significant digits is a problem. You'll have to expand your question to show what you want and why you can't/don't want to set the significant digits.
S.Lott
+5  A: 

Changing the decimal working precision to 2 digits is not a good idea, unless you are absolutely only going to perform a single operation.

You should always perform calculations at higher precision than the level of significance, and only round the final result. If you perform a long sequence of calculations and round to the number of significant digits at each step, errors will accumulate. The decimal module doesn't know whether any particular operation is one in a long sequence, or the final result, so it assumes that it shouldn't round more than necessary. Ideally it would use infinite precision, but that is too expensive so the Python developers settled for 28 digits.
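
A small demonstration of how rounding at every step can go wrong (the working precision and the numbers are just illustrative):

>>> import decimal
>>> from decimal import Decimal
>>> decimal.getcontext().prec = 2
>>> total = Decimal('0')
>>> for _ in range(100):
...     total += Decimal('0.3')
...
>>> total    # the exact sum is 30, but each step was rounded to 2 digits
Decimal("10")

Once the running total reaches 10, adding 0.3 rounds right back to 10, so the sum never gets past it.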

Once you've arrived at the final result, what you probably want is quantize:

>>> (Decimal('1.00') / Decimal('3.00')).quantize(Decimal("0.001"))
Decimal("0.333")

You have to keep track of significance manually. If you want automatic significance tracking, you should use interval arithmetic. There are some libraries available for Python, including pyinterval and mpmath (which supports arbitrary precision). It is also straightforward to implement interval arithmetic with the decimal library, since it supports directed rounding.
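
As a sketch of that last point, here is a hypothetical one-operation interval division built on the decimal module's directed rounding (interval_div is an illustrative name, not part of any of the libraries mentioned):

from decimal import Decimal, Context, ROUND_FLOOR, ROUND_CEILING

def interval_div(a, b, prec=28):
    """Divide a by b, returning (lower, upper) bounds that
    bracket the exact quotient."""
    lo = Context(prec=prec, rounding=ROUND_FLOOR).divide(a, b)    # round down
    hi = Context(prec=prec, rounding=ROUND_CEILING).divide(a, b)  # round up
    return lo, hi

For example:

>>> interval_div(Decimal('1.00'), Decimal('3.00'), prec=4)
(Decimal("0.3333"), Decimal("0.3334"))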

You may also want to read the Decimal Arithmetic FAQ: Is the decimal arithmetic ‘significance’ arithmetic?

fredrikj
What about 300/100? Your code would incorrectly give 3.000.
Pyrolistical
+1  A: 

Just out of curiosity: is it necessary to use the decimal module? Why not use floating point, with a significant-figures rounding of the numbers when you are ready to see them? Or are you trying to keep track of the significant figures of the computation (as when you have to do an error analysis of a result, calculating the computed error as a function of the uncertainties that went into the calculation)? If you want a rounding function that rounds from the left of the number instead of from the right, try:

def lround(x, leadingDigits=0):
    """Return x either as 'print' would show it (the default)
    or rounded to the specified digit as counted from the leftmost
    non-zero digit of the number, e.g. lround(0.00326, 2) --> 0.0033
    """
    assert leadingDigits >= 0
    if leadingDigits == 0:
        return float(str(x))  # just give it back like 'print' would give it
    # %.*e keeps precision+1 significant digits, hence the -1 below
    return float('%.*e' % (int(leadingDigits) - 1, x))

The numbers will look right when you print them or convert them to strings, but if you are working at the prompt and don't explicitly print them they may look a bit strange:

>>> lround(1./3.,2),str(lround(1./3.,2)),str(lround(1./3.,4))
(0.33000000000000002, '0.33', '0.3333')
A: 

Take a look at my answer here: stackoverflow thread

It isn't Python, but it might point you in the right direction.

Ben