Python's Decimal doesn't support being constructed from a float; it expects that you convert the float to a string first.
This is very inconvenient since standard string formatters for floats require that you specify the number of decimal places rather than significant places. So if you have a number that could have as many as 15 decimal places you need to format as Decimal("%.15f" % my_float), which will give you garbage at the 15th decimal place if you also have any significant digits before the decimal point.
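To illustrate (the exact trailing digits depend on the platform's double representation, but the tail is never what the user typed):

    from decimal import Decimal

    my_float = 1234.56789          # what the user actually entered

    s = "%.15f" % my_float         # 15 *decimal places*, i.e. ~19 significant digits
    print(s)                       # 1234.56789... followed by rounding noise
    print(Decimal(s))              # the noise is faithfully preserved in the Decimal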
Can someone suggest a good way to convert from float to Decimal, preserving the value as the user has entered it, perhaps limiting the number of significant digits that can be supported?
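For illustration, this is roughly the behaviour I'm after. The use of repr() is just one idea, not a settled answer (it only produces the shortest round-tripping string on Python 2.7/3.1 and later), and sig_digits is a made-up cap on significant digits:

    from decimal import Decimal, Context

    def float_to_decimal(x, sig_digits=15):
        # On Python 2.7+/3.1+, repr() gives the shortest string that round-trips
        # to the same float, so no spurious fixed-point padding appears.
        d = Decimal(repr(x))
        # Cap the significant digits by re-rounding in a context of that precision.
        return Context(prec=sig_digits).plus(d)

    print(float_to_decimal(0.1))         # Decimal('0.1')
    print(float_to_decimal(1234.56789))  # Decimal('1234.56789')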