When multiplying (or doing any other arithmetic) with binary and decimal numbers, would you simply convert and then multiply in decimal?
E.g., would 3 (base 10) * 100 (base 2) = 3 * 4 = 12?
You would convert them both to integers before multiplying, I would hope. At that point they're all stored in binary anyway.
You can multiply in any base as long as the base is the same for each operand.
In your example, you could have converted the 3 (base 10) to 11 (base 2) and multiplied:
11 * 100 = 1100
1100 (base 2) = 12 (base 10)
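For instance, here is a quick check in Python (just an illustrative sketch; parsing base-2 strings with int() is one way to do the conversion):

    # Parse both operands as base-2 strings, then multiply as ordinary integers.
    a = int("11", 2)     # binary 11  -> 3
    b = int("100", 2)    # binary 100 -> 4
    product = a * b
    print(product)       # 12
    print(bin(product))  # 0b1100, i.e. 1100 in base 2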
Numbers are numbers. 3 * 0b100 will always equal 12, regardless of whether you use a lookup table or bit shifting to multiply them.
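As a sanity check in Python (illustrative only), the literal prefix just changes how you write the number, not its value:

    print(3 * 0b100)       # 12
    print(0b11 * 0b100)    # 12
    print(bin(3 * 0b100))  # 0b1100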