The ISO C standard allows three encoding methods for signed integers: two's complement, one's complement and sign/magnitude.
What's a good way to detect the encoding at runtime (or at some other time, if there's a better approach)? I want to know this so I can optimise a bignum library for each of the possibilities.
I plan on calculating this once per program run and storing it in a variable, so it doesn't have to be blindingly fast - I'm assuming the encoding won't change during the program run :-)
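For context, the sort of check I have in mind is something like the minimal sketch below. It assumes that inspecting the low bits of the representation of `-1` is a sensible way to tell the three encodings apart (two's complement stores -1 as all 1 bits, one's complement ends in ...10, sign/magnitude ends in ...01); the names `detect_encoding` and `enum encoding` are just placeholders of mine:

```c
#include <stdio.h>

/* The three signed-integer encodings permitted by ISO C. */
enum encoding { TWOS_COMPLEMENT, ONES_COMPLEMENT, SIGN_MAGNITUDE };

/* Assumption: -1 & 3 yields 3 under two's complement (…111),
 * 2 under one's complement (…110), and 1 under sign/magnitude (1…001). */
static enum encoding detect_encoding(void)
{
    int minus_one = -1;
    switch (minus_one & 3) {
    case 3:  return TWOS_COMPLEMENT;
    case 2:  return ONES_COMPLEMENT;
    default: return SIGN_MAGNITUDE;   /* case 1 */
    }
}

int main(void)
{
    static const char *names[] = {
        "two's complement", "one's complement", "sign/magnitude"
    };

    /* Computed once at startup and kept in a variable, as described above. */
    enum encoding enc = detect_encoding();
    printf("signed int encoding: %s\n", names[enc]);
    return 0;
}
```

Is that kind of bit-pattern test reliable, or is there a cleaner or more portable way to do it?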