If the compiler in question is mspgcc, it should emit an assembler listing of the compiled program alongside the binary/hex file. Other compilers may require additional flags to do so, or even a separate disassembler run on the binary.
That listing is the place to look for an explanation.
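For example, with a GCC-based toolchain such as mspgcc (the msp430-gcc and msp430-objdump tool names below are assumptions; adjust them to your installation), something along these lines should produce a listing:

msp430-gcc -O2 -S main.c          # plain assembler listing, written to main.s
msp430-gcc -O2 -g -c main.c       # or: compile with debug info ...
msp430-objdump -d -S main.o       # ... and disassemble it, interleaved with the C source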
Due to compiler optimizations, the actual code executed by the processor may bear little resemblance to the original C code (though it normally does the same job).
Stepping through the few assembler instructions representing the faulty code should reveal the cause of the problem.
My guess is that the compiler somehow optimizes the whole calculation, since the defined constant is known at compile time.
255*x could be optimized to (x << 8) - x, i.e. 256*x - x, which is faster and smaller; a quick sketch of that identity follows below.
Maybe something is going wrong with the optimized assembler code.
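Just to illustrate the identity itself (a minimal host-side sketch of the arithmetic, not mspgcc output): as long as the intermediate arithmetic is wider than 8 bits, 255*x and (x << 8) - x agree for every 8-bit x.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 255*x == 256*x - x == (x << 8) - x for any 8-bit x,
       provided the arithmetic is done in a type wider than 8 bits. */
    for (unsigned x = 0; x < 256; x++) {
        uint16_t mul   = (uint16_t)(255u * x);
        uint16_t shift = (uint16_t)((x << 8) - x);
        if (mul != shift)
            printf("mismatch at x = %u\n", x);
    }
    printf("check finished\n");
    return 0;
}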
I took the time to compile both versions on my system. With optimization enabled, mspgcc produces the following code:
#define PRR_SCALE 255
uint8_t a = 3;
uint8_t b = 4;
uint8_t prr;
prr = (PRR_SCALE * a) / b;
40ce: 3c 40 fd ff mov #-3, r12 ;#0xfffd
40d2: 2a 42 mov #4, r10 ;r2 As==10
40d4: b0 12 fa 6f call __divmodhi4 ;#0x6ffa
40d8: 0f 4c mov r12, r15 ;
printf("prr: %u\n", prr);
40da: 7f f3 and.b #-1, r15 ;r3 As==11
40dc: 0f 12 push r15 ;
40de: 30 12 c0 40 push #16576 ;#0x40c0
40e2: b0 12 9c 67 call printf ;#0x679c
40e6: 21 52 add #4, r1 ;r2 As==10
As we can see, the compiler folds 255*3 directly into the constant -3 (0xfffd) instead of 765 (0x02fd), and that is the problem: somehow the 255 gets interpreted as the signed 8-bit value -1 instead of as an unsigned 16-bit 255, or it is first narrowed to 8 bits and then sign-extended to 16 bits.
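For comparison, here is a minimal host-side sketch (plain C on a desktop compiler, not MSP430 code) of what a standards-conforming compiler should do with that expression, and of where the 0xfffd constant comes from if 255 is narrowed to a signed 8-bit -1 first:

#include <stdint.h>
#include <stdio.h>

#define PRR_SCALE 255

int main(void)
{
    uint8_t a = 3;
    uint8_t b = 4;

    /* Integer promotions: a and b are widened to int, so this is
       evaluated as (255 * 3) / 4 = 765 / 4 = 191. */
    uint8_t prr = (PRR_SCALE * a) / b;
    printf("prr: %u\n", prr);                 /* expected: prr: 191 */

    /* The constant mspgcc emitted, 0xfffd, is exactly what you get
       if 255 is first turned into the signed 8-bit value -1. */
    printf("0x%04x\n", (uint16_t)(-1 * 3));   /* prints 0xfffd */

    return 0;
}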
A discussion of this issue has already been started on the mspgcc mailing list.