Why are hexadecimal numbers prefixed with 0x and not anything else? I understand the usage of a prefix, but I don't understand the significance of 0x.
It appears to have stemmed from a C convention: http://en.wikipedia.org/wiki/Hexadecimal#Representing_hexadecimal
Note: I don't know the correct answer; the following is just my personal speculation!
As has been mentioned, a leading 0 before a number means it's octal:
04524 // octal, leading 0
Imagine needing to come up with a system to denote hexadecimal numbers, keeping in mind that we're working in a C-style environment. How about ending with h, like assembly? Unfortunately you can't: it would allow tokens that are also valid identifiers (e.g. you could name a variable the same thing), which would make for some nasty ambiguities.
8000h // hex
FF00h // oops - valid identifier! Hex or a variable or type named FF00h?
You can't lead with a letter for the same reason:
xFF00 // also valid identifier
Using a hash was probably thrown out because it conflicts with the preprocessor:
#define ...
#FF00 // invalid preprocessor token?
In the end, for whatever reason, they decided to put an x after a leading 0 to denote hexadecimal. It is unambiguous: since the token still starts with a digit, it can't be a valid identifier, and it is probably based on the octal convention of a leading 0.
0xFF00 // definitely not an identifier!
0x simply indicates that the following digits represent an integer value in hex notation (base 16). The grammar is

hexadecimal-constant:
    0x hex-digit
    0X hex-digit
    hexadecimal-constant hex-digit

where

hex-digit: one of
    0 1 2 3 4 5 6 7 8 9
    A B C D E F
    a b c d e f
Thus, 0x10 means "interpret the integer constant 10 in base 16, not base 10".
As to why C uses 0x as opposed to some other indicator, I suspect that was an entirely arbitrary decision by the language designers. It's consistent with using a leading 0 to indicate an octal constant. I haven't found a rationale for it in any of my references or the language standard (although I haven't looked all that hard).