If a 32-bit processor really only operates on 32-bit values at a time, how can math operations work on 64-bit numbers? For example:
long lngTemp1 = 123456789123;
long lngTemp2 = lngTemp1 * 123;
According to MSDN, a long in C# is a signed 64-bit integer: http://msdn.microsoft.com/en-us/library/ctetwysk(VS.71).aspx
How is it that a 32-bit Intel processor can execute code like the above without overflowing?