A lot of the answers to questions about the accuracy of float and double recommend using decimal for monetary amounts. This works because today all currencies are decimal except MGA and MRO, and those have subunits of 1/5 of the main unit, so they are still decimal-friendly.
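For example, here is the kind of rounding error those answers warn about, sketched in Python (with the decimal module standing in for whatever fixed-point type a given language offers):

```python
from decimal import Decimal

# Ten 10-cent amounts: the binary double drifts, the decimal type does not.
total_double = sum([0.10] * 10)              # 0.1 has no exact binary representation
total_decimal = sum([Decimal("0.10")] * 10)  # exact decimal arithmetic

print(total_double)         # 0.9999999999999999
print(total_double == 1)    # False
print(total_decimal)        # 1.00
print(total_decimal == 1)   # True
```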
But what about the software used in U.S. stock markets back when prices were quoted in 1/16ths of a dollar? The accuracy of binary data types wouldn't have been an issue then, right?
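My reasoning, sketched in Python on the assumption that such prices were held as binary floating point at all (which is really what I'm asking): sixteenths are exact binary fractions (1/16 = 2^-4), so a double can represent and add them without any rounding.

```python
from fractions import Fraction

# Sixteenths are exact powers of two, so a binary double holds them exactly.
price = 5 + 3/16                    # e.g. a quote of 5 3/16 dollars
print(price == Fraction(83, 16))    # True: the double is exactly 83/16
print(sum([1/16] * 16) == 1.0)      # True: no drift when summing sixteenths
```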
Going further back, how did pre-1971 British accounting software deal with pounds, shillings, and pence? Did their versions of COBOL have a special PIC
clause for it? Were all amounts stored in pence? How was decimalisation handled?
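To make the last questions concrete, this is the kind of "store everything in pence" scheme I have in mind (a sketch in modern Python, purely my guess at the approach rather than a claim about how any real system worked; the function names are mine):

```python
# Pre-decimal currency kept as an integer count of pence (12d = 1s, 20s = £1),
# converted back to pounds/shillings/pence only for display.

def to_pence(pounds: int, shillings: int, pence: int) -> int:
    return (pounds * 20 + shillings) * 12 + pence

def format_lsd(total_pence: int) -> str:
    pounds, rest = divmod(total_pence, 240)
    shillings, pence = divmod(rest, 12)
    return f"£{pounds} {shillings}s {pence}d"

print(format_lsd(to_pence(3, 7, 6) + to_pence(0, 15, 9)))  # £4 3s 3d
```

Is that roughly what real systems did, or did they carry pounds, shillings, and pence as separate fields?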