I don't use Tcl in my daily work. However, I have a colleague who occasionally interacts with a customer who wishes our tool's extension language worked more like Tcl (!). One topic he brought up was how Tcl let him set how much precision was stored in a double, via a global variable, tcl_precision.

I did some web searches, and the documentation I found certainly seems to suggest that this is the case (rather than just setting the print precision). However, it looks as if tcl_precision has had a checkered history. I get the impression that it was removed entirely for a version or two, and then put back, but with warnings and tut-tuts about overriding the default value, 0, which really means 17 (which the manual promises is enough to represent any IEEE 754 double).

So can anyone tell me more about what tcl_precision actually promises to do, and what effect it has on doubles under-the-covers? Is it just a global setting for printing numbers, or does it in fact truncate the precision of numbers as stored (which seems dangerous to me)?

+3  A: 

I don't think your customer is correct in his understanding.

I believe that the tcl_precision variable allows you to set the number of significant digits Tcl will display when it converts a real to a string, as you suggest.

The default setting is six:

expr 1.11111111 + 1.11111111
=> 2.22222

If you set it to 10 you would get:

set tcl_precision 10
expr 1.11111111 + 1.11111111
=> 2.22222222

It doesn't affect how Tcl represents the real number internally, which is documented as being a C double, providing about 15 decimal digits of precision.

Binary Nerd
+1  A: 

tcl_precision is a bit funny because the name is misleading.

It affects how Tcl prints a value, not how the value is computed internally.

Internally Tcl uses the C double type, so calculations are done with the same C doubles regardless of the tcl_precision value.

So if you do:

set x 1.123456
set tcl_precision 2

it will display 1.1 (tcl_precision counts significant digits), but internally x still holds the full double value it was originally set to.

If you change tcl_precision back to, say, 10, it will display 1.123456 again.

So you're not actually controlling the calculation precision, just how the value is displayed.

iQ
+1  A: 

The other answers are correct that tcl_precision controls float -> string conversion.

However, the real problem (and the reason you shouldn't mess with tcl_precision) is that Tcl's everything-is-a-string (EIAS) philosophy means this conversion isn't only done at display time: you might build an expression in a string and then expr it, or build a script in a string and eval it. Leaving tcl_precision alone means that conversion to string is reversible, which is obviously important in these cases.

Having said that, if your customer is relying on changing tcl_precision to change the computation, they probably don't really know what they are doing. You may want to see if you can find out more about what they actually want to do.

jk
+1  A: 

In Tcl 8.5, the tcl_precision global variable should (probably) be left well alone, as the default setting uses the minimum number of digits that represents the relevant IEEE double exactly. That's not the same as using a fixed number of digits, but it does what people usually want. If they want something else, chances are they're talking about formatting the value for output; that's when you want to use:

format %.6f $doubleValue

or something like that to produce a value in the actual format they want to see.

Donal Fellows