I'm probably completely wrong, and I don't know much about this, but I have a question about decimal number data types in programming languages. I understand that floats aren't completely precise because they're stored in binary with an exponent or something, but I've always wondered why decimal number types don't just store the number as if there were no decimal point, do the calculations on that whole number, and then put the decimal point back in afterwards. Like in this situation:
2.159 * 3.507  -->  2159 * 3507 = 7571613
  ^^^     ^^^
  123     456

6 decimals in total... 7571613 -> 7.571613
                        ^^^^^^
                        654321

so 2.159 * 3.507 = 7.571613
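To make it concrete, here's a rough sketch of what I'm imagining, in Python. The function names and the string-based parsing are just made up for illustration; I'm not claiming any real decimal type works exactly like this:

def to_scaled(text):
    """Turn a decimal string like '2.159' into (2159, 3): the digits as a
    plain integer plus how many decimal places it had.
    (Ignores negatives and other edge cases; this is only a sketch.)"""
    if "." in text:
        whole, frac = text.split(".")
        return int(whole + frac), len(frac)
    return int(text), 0

def multiply(a, b):
    """Multiply two decimal strings using only integer arithmetic."""
    ia, da = to_scaled(a)
    ib, db = to_scaled(b)
    product = ia * ib       # 2159 * 3507 = 7571613, exact integer math
    decimals = da + db      # 3 + 3 = 6 decimal places in the result
    if decimals == 0:
        return str(product)
    digits = str(product).rjust(decimals + 1, "0")
    return digits[:-decimals] + "." + digits[-decimals:]

print(multiply("2.159", "3.507"))  # prints 7.571613

So everything stays as whole numbers the entire time, and the decimal point only gets put back in at the very end.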
Why can't it work like that, instead of using binary fractions? Sorry if the answer is obvious; I really don't know much about this. Thanks for explaining.