views: 172
answers: 5

After all of this time, I've never thought to ask this question; I understand this came from C++, but what was the reasoning behind it:

  • Specify decimal numbers as you normally would
  • Specify octal numbers by a leading 0
  • Specify hexadecimal numbers by a leading 0x

Why 0? Why 0x? Is there a natural progression for base-32?
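For concreteness, a minimal C sketch of the three notations, all naming the same value:

```c
#include <stdio.h>

int main(void)
{
    int dec = 42;    /* decimal, written as usual            */
    int oct = 052;   /* leading 0  -> octal:  5*8  + 2  = 42 */
    int hex = 0x2a;  /* leading 0x -> hex:    2*16 + 10 = 42 */

    printf("%d %d %d\n", dec, oct, hex);  /* prints: 42 42 42 */
    return 0;
}
```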

A: 

I think 0x actually came from the UNIX/Linux world and was picked up by C/C++ and other languages, but I don't know the exact reason or true origin.

J.Hendrix
David Thornley
Was Unix written in assembly and Linux in C?
J.Hendrix
0x is definitely post-Unix and post-C. Both UNIX and C were shipping by 1976 without it. It seems to have appeared by 1978 in "The C Programming Language", 1st Ed.
Joe Koberg
Ritchie's C History: http://cm.bell-labs.com/cm/cs/who/dmr/chist.html
Joe Koberg
C was created to help build the first Unix OSes. So early on the C "world" and the Unix "world" were the same world.
T.E.D.
+3  A: 

Hi

I dunno ...

0 is for 0ctal

0x is for, well, we've already used 0 to mean octal and there's an x in hexadecimal so bung that in there too

as for natural progression, best look to the latest programming languages which can affix subscripts such as

123_27 (interpret _ to mean subscript)

and so on

?

Mark

High Performance Mark
+1 That makes sense to me too.
J.Hendrix
Same. This is precisely how the rest of C was "designed".
T.E.D.
**x** sounds like **'ex** which is 18th century Cockney for "hexadecimal numeric literal."
detly
+2  A: 

The zero prefix for octal, and 0x for hex, are from the early days of Unix.

The reason for octal's existence dates to when there was hardware with 6-bit bytes, which made octal the natural choice. Each octal digit represents 3 bits, so a 6-bit byte is two octal digits. The same goes for hex, from 8-bit bytes, where a hex digit is 4 bits and thus a byte is two hex digits. Using octal for 8-bit bytes requires 3 octal digits, of which the first can only have the values 0, 1, 2 and 3 (the first digit is really 'tetral', not octal). There is no reason to go to base32 unless somebody develops a system in which bytes are ten bits long, so a ten-bit byte could be represented as two 5-bit "nybbles".
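To make the digit/bit correspondence concrete, a small C sketch (the values are just illustrative): one hex digit covers 4 bits, so an 8-bit byte prints as exactly two hex digits, while octal covers 3 bits per digit and needs three, with the leading digit limited to 0-3.

```c
#include <stdio.h>

int main(void)
{
    unsigned char byte = 0xFF;      /* all 8 bits set */

    /* Hex: 4 bits per digit -> an 8-bit byte is exactly 2 digits. */
    printf("hex:   %02x\n", byte);  /* prints: ff  */

    /* Octal: 3 bits per digit -> an 8-bit byte needs 3 digits,
       and the leading digit can only be 0..3.                     */
    printf("octal: %03o\n", byte);  /* prints: 377 */

    return 0;
}
```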

Jim Garrison
I think the question relates to the origin of the syntax.
Joe Koberg
+1  A: 

Is there a natural progression for base-32?

This is part of why Ada uses the form 16# to introduce hex constants, 8# for octal, 2# for binary, etc.

I wouldn't concern myself too much over needing room for "future growth" in number bases, though. This isn't like RAM or addressing space, where you need an order of magnitude more every generation.

In fact, studies have shown that octal and hex are pretty much the sweet spot for human-readable representations that are binary-compatible. If you go any lower than octal, it starts to require a ridiculous number of digits to represent larger numbers. If you go any higher than hex, the math tables get ridiculously large. Hex is actually a bit too much already, but octal has the problem that it doesn't evenly fit in a byte.
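As a side note: even though C never grew literal prefixes for other radices, the standard library can already parse them. strtol accepts any base from 2 to 36 (and base 0 means "apply the usual 0/0x literal rules"), so base-32 text is at least readable without any new syntax. A rough sketch:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* strtol parses bases 2..36; there is simply no literal prefix
       for most of them. "VV" in base 32 is 31*32 + 31 = 1023.      */
    long v = strtol("VV", NULL, 32);
    printf("%ld\n", v);              /* prints: 1023 */

    /* Base 0 tells strtol to apply the C literal rules itself:
       a leading 0x means hex, a leading 0 means octal.             */
    printf("%ld %ld\n",
           strtol("0x2a", NULL, 0),  /* 42 */
           strtol("052",  NULL, 0)); /* 42 */
    return 0;
}
```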

T.E.D.
+1  A: 

There is a standard encoding for Base32. It is very similar to Base64, but it isn't very convenient to read. Hex is used because two hex digits can represent one 8-bit byte. And octal was used primarily for older systems that used 12-bit bytes. It made for a more compact representation of data when compared to displaying raw registers as binary.
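To make that concrete, a rough C sketch of RFC 4648-style Base32 encoding (padding omitted, and the helper name is just for illustration): every 5 input bits select one character of a 32-symbol alphabet, which keeps the output compact but, as noted, not very pleasant to read.

```c
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

static const char B32[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

/* Encode len bytes into unpadded Base32; out must hold enough room
   (roughly len*8/5 + 2 characters).                                 */
static void base32_encode(const uint8_t *in, size_t len, char *out)
{
    uint32_t buffer = 0;
    int bits = 0;
    size_t o = 0;

    for (size_t i = 0; i < len; i++) {
        buffer = (buffer << 8) | in[i];          /* append 8 bits    */
        bits += 8;
        while (bits >= 5) {                      /* emit 5 at a time */
            bits -= 5;
            out[o++] = B32[(buffer >> bits) & 0x1F];
        }
    }
    if (bits > 0)                                /* leftover bits, zero-padded */
        out[o++] = B32[(buffer << (5 - bits)) & 0x1F];
    out[o] = '\0';
}

int main(void)
{
    char out[16];
    base32_encode((const uint8_t *)"foo", 3, out);
    printf("%s\n", out);                         /* prints: MZXW6 */
    return 0;
}
```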

It should also be noted that some languages use o### for octal and x## or h## for hex, as well as many other variations.

Matthew Whited