Hi, I'm trying to multiply A*B in 16-bit fixed point while keeping as much accuracy as possible. A is a 16-bit unsigned integer; B is stored as an integer divided by 1000 and is always between 0.001 and 9.999. It's been a while since I dealt with problems like this, so:
- I know I can just compute A*B/1000 after moving to 32-bit variables, then truncate back to 16-bit; I'd like to make it faster than that
- I'd like to do all the operations without moving to 32-bit (since I have only 16-bit multiplication available)
Is there any easy way to do that?
Edit: A will be between 0 and 4000, so all possible results fit in the 16-bit range too (at most 4000 * 9.999 = 39996, below 65535).
Edit: B comes from the user, entered digit by digit in an X.XXX mask; that's why the operation is a division by 1000.
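To make the X.XXX entry concrete, here is a sketch of how B could be assembled from the four entered digits into the thousandths representation used above; the function and parameter names are assumptions for illustration.

```c
#include <stdint.h>

/* Combine the digits of the X.XXX mask into B * 1000.
   E.g. the entry 9.999 becomes 9999, and 0.001 becomes 1. */
uint16_t b_from_digits(uint8_t units, uint8_t tenths,
                       uint8_t hundredths, uint8_t thousandths)
{
    return (uint16_t)(units * 1000u + tenths * 100u
                      + hundredths * 10u + thousandths);
}
```

Storing B this way keeps the user's input exact (no binary rounding of the decimal fraction), which is why the final scaling must be a division by 1000 rather than a cheap shift.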