I don't use Tcl in my daily work. However, I have a colleague who occasionally interacts with a customer who wishes our tool's extension language worked more like Tcl (!). One topic he brought up was how Tcl let him set how much precision was stored in a double, via a global variable, tcl_precision.
I did some web searches, and the documentation I found does seem to suggest that it affects the stored value (rather than just setting the print precision). However, it looks as if tcl_precision has had a checkered history: I get the impression that it was removed entirely for a version or two, and then put back, but with warnings and tut-tuts about overriding the default value, 0, which really means 17 significant digits (which the manual promises is enough to represent any IEEE 754 double exactly).
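For what it's worth, that 17-digit claim checks out on its own. Since I don't have a tclsh handy, here's a quick sanity check in Python (whose floats are IEEE 754 doubles on mainstream platforms) showing that 17 significant digits round-trip a double exactly, while 16 can lose the last bit:

```python
def round_trips(x: float, digits: int) -> bool:
    """Format x with `digits` significant digits, parse it back,
    and report whether the original value survived."""
    return float("%.*g" % (digits, x)) == x

x = 0.1 + 0.2               # 0.30000000000000004..., the classic example
print(round_trips(x, 17))   # True: 17 digits reproduce the exact double
print(round_trips(x, 16))   # False: the 16-digit form reads back as 0.3
```

So printing with 17 digits is lossless; anything less can silently change the value if the printed string is ever parsed back in.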
So can anyone tell me what tcl_precision actually promises to do, and what effect it has on doubles under the covers? Is it just a global setting for printing numbers, or does it in fact truncate the precision of numbers as stored (which seems dangerous to me)?