#include <math.h>   /* needed for fmod() */

/*
 * Convert a value in the range 0-99 to ASCII.
 * 0-9:   the ASCII digit is returned directly.
 * 10-99: the two ASCII digits are written to the buffer pointed to by
 *        converted_value and 0 is returned.
 */
char byte_to_ascii(char value_to_convert, volatile char *converted_value) {
    if (value_to_convert < 10) {
        return (value_to_convert + 48);               /* 48 == '0' */
    } else {
        char a = value_to_convert / 10;               /* tens digit */
        double x = fmod((double)value_to_convert, 10.0);
        char b = (char)x;                             /* ones digit via floating-point remainder */
        a = a + 48;                                   /* convert both digits to ASCII */
        b = b + 48;
        *converted_value = a;
        *(converted_value + 1) = b;
        return 0;
    }
}
The purpose of this function is to take an unsigned char value from 0 through 99 and either return its ASCII equivalent (when the value is 0-9) or, for two-digit values, write the ASCII digits into a small global character array that the calling code can reference after the function returns.
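To make that contract concrete, here is a minimal usage sketch (the buffer and function names are just placeholders, not my actual globals):

volatile char buf[2];   /* placeholder for the two-byte global the pointer targets */

void contract_example(void) {
    /* 0-9: the ASCII digit comes back as the return value; the buffer is untouched */
    char c = byte_to_ascii(7, buf);       /* c == '7' */

    /* 10-99: both ASCII digits are written to the buffer and 0 is returned */
    byte_to_ascii(42, buf);               /* buf[0] == '4', buf[1] == '2' */
}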
I ask this question because two compilers from the same vendor interpret this code in different ways.
This code was written to parse address bytes sent via RS485 into strings that can easily be passed to a send-lcd-string function.
This code is written for the PIC18 architecture (an 8-bit microcontroller).
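Roughly, the surrounding code looks like the sketch below; send_lcd_string, the buffer, and the function names are placeholders standing in for my actual routines, shown only to illustrate where the converted digits end up.

/* Placeholder prototype: the real routine writes a null-terminated string to the LCD. */
void send_lcd_string(volatile char *s);

volatile char addr_text[3];   /* global buffer the display string is built in */

void display_rs485_address(char addr) {
    if (addr < 10) {
        addr_text[0] = byte_to_ascii(addr, addr_text);   /* single ASCII digit via return value */
        addr_text[1] = '\0';
    } else {
        byte_to_ascii(addr, addr_text);                  /* two ASCII digits written into addr_text[0..1] */
        addr_text[2] = '\0';
    }
    send_lcd_string(addr_text);
}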
The problem is that the free/evaluation version of one compiler generates correct assembly that works, albeit with a performance hit, while the paid and supposedly superior compiler generates more efficient code at the expense of my being able to reference the addresses of all the byte arrays I use to drive the graphics on my LCD display.
I know I'm muddying the water by using a proprietary compiler for a less-than-typical architecture, but I hope someone out there has some suggestions.
Thanks.