views: 1658
answers: 20

Why do most computer programming languages not allow binary numbers to be used like decimal or hexadecimal?

  • In VB.NET you could write a hexadecimal number like &H4
  • In C you could write a hexadecimal number like 0x04

Why not allow binary numbers?

  • &B010101
  • 0y1010

Bonus Points!... What languages do allow binary numbers?


Edit

Wow! - So the majority think it's because of brevity and poor old "waves" thinks it's due to the technical aspects of the binary representation.

+2  A: 

Hex and octal are just shorter ways to write binary. Would you really want a 64-character long constant defined in your code?

Andrei Krotkov
Why not, if it's clearer? Often when you're describing port pins or bit flags in embedded systems, binary is a much clearer way to indicate values than clumping 3- or 4-bit strings together into an octal or hex digit.
Jason S
Hmm, debatable if it's clearer :-) We do still use binary fields for things, but we tend to use an enum like this in VB.NET: `Enum UserFlags : AllowRed = 2^0 : AllowGreen = 2^1 : End Enum`, which allows you to do things like `MyFlags = AllowRed + AllowGreen`.
Rob Nicholson
@Jason: I don't think that it is clearer. Humans are visually inclined to recognize different symbols much more easily than the number and positioning of similar symbols. It's much easier to visually differentiate 3, 7, C, F than 0011, 0111, 1100, 1111.
Adam Bellaire
If you're doing bit masks, yes.
Joe Philllips
-1, because this makes no sense. Hex and octal are shorter ways to write hex and octal. Not binary.
Alex Baranosky
@GordonG: Speaking of making no sense, can you explain how hex is a shorter way to write hex?
Adam Bellaire
@GordonG - Hex is binary taken in 4-bit blocks. Octal is binary taken in 3-bit blocks.
Andrei Krotkov
Binary is more readable for bitmaps: 00000000 00011000 00111100 01100110 01111110 01100110 01100110 00000000 looks more like an 'A' than 00 18 3C 66 7E 66 66 00 does.
dan04
@dan: True, but it's a rare case when it's a good idea to store a bitmap directly in code.
Andrei Krotkov
+33  A: 

Because hexadecimal (and rarely octal) literals are more compact and people using them usually can convert between hexadecimal and binary faster than deciphering a binary number.

Python 2.6+ allows binary literals (0b101010), as does Ruby.
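For example, in Python (2.6 or later):

```python
# Binary literals use the 0b prefix; bin() converts back for display.
flags = 0b101010
print(flags)       # 42
print(bin(flags))  # '0b101010'
```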

phihag
J.F. Sebastian
+10  A: 

In C++0x, binary numbers will be supported via user-defined literals. I'm not sure if they will be part of the standard, but at worst you'll be able to enable them yourself:

constexpr int operator"" _B(const char* digits); // raw literal operator: receives the digits as a string

assert(1010_B == 10);
Motti
I can't wait to do complex numbers with this...
Zifre
+3  A: 

Common Lisp allows binary numbers, using #b... (bits going from highest-to-lowest power of 2). Most of the time, it's at least as convenient to use hexadecimal numbers, though (by using #x...), as it's fairly easy to convert between hexadecimal and binary numbers in your head.

Vatine
Actually, Common Lisp supports arbitrary radices from 2 to 36.
Svante
Yep, though I think only binary, octal, decimal and hexadecimal have "convenient" short forms (the others require #nR... to make the reader happy).
Vatine
+1  A: 

Common wisdom holds that long strings of binary digits, e.g. 32 bits for an int, are too difficult for people to conveniently parse and manipulate. Hex is generally considered easier, though I've not used either enough to have developed a preference.

Ruby, as already mentioned, attempts to resolve this by allowing _ to be liberally inserted in the literal, allowing, for example:

irb(main):005:0> 1111_0111_1111_1111_0011_1100
=> 111101111111111100111100
MHarris
+2  A: 

D supports binary literals using the syntax 0[bB][01]+, e.g. 0b1001. It also allows embedded _ characters in numeric literals to allow them to be read more easily.

Joe Gauterin
+1  A: 

For the record, and to answer this:

Bonus Points!... What languages do allow binary numbers?

Specman (aka e) allows binary numbers. Though to be honest, it's not quite a general purpose language.

Nathan Fellman
A: 

It seems that from a readability and usability standpoint, the hex representation is a better way of defining binary numbers. The fact that they don't add it is probably more a matter of user need than a technology limitation.

A: 

I expect that the language designers just didn't see enough of a need to add binary numbers. The average coder can parse hex just as well as binary when handling flags or bit masks. It's great that some languages support binary as a representation, but I think on average it would be little used. Although binary -- if available in C, C++, Java, or C# -- would probably be used more than octal!

Eddie
+6  A: 

See perldoc perlnumber:

NAME
   perlnumber - semantics of numbers and numeric operations in Perl

SYNOPSIS
       $n = 1234;              # decimal integer
       $n = 0b1110011;         # binary integer
       $n = 01234;             # octal integer
       $n = 0x1234;            # hexadecimal integer
       $n = 12.34e-56;         # exponential notation
       $n = "-12.34e56";       # number specified as a string
       $n = "1234";            # number specified as a string
J.J.
A: 

In Smalltalk it's like 2r1010. You can use any base up to 36 or so.

Darius Bacon
A: 

Hex is just less verbose, and can express anything a binary number can.

Ruby has nice support for binary numbers, if you really want it. 0b11011, etc.

Alex Fort
+6  A: 

In order for a bit representation to be meaningful, you need to know how to interpret it. You would need to specify what type of binary number you're using (signed/unsigned, two's complement, one's complement, signed magnitude).

The only languages I've ever used that properly support binary numbers are hardware description languages (Verilog, VHDL, and the like). They all have strict (and often confusing) definitions of how numbers entered in binary are treated.
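One way to see the interpretation problem, sketched here in Python with the standard struct module: the same four bytes read back as unsigned or as two's-complement signed give different numbers.

```python
import struct

raw = struct.pack('>I', 0xFFFFFFFF)         # four 0xFF bytes
as_unsigned = struct.unpack('>I', raw)[0]   # read as unsigned 32-bit
as_signed = struct.unpack('>i', raw)[0]     # same bits, two's-complement signed
print(as_unsigned)  # 4294967295
print(as_signed)    # -1
```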

jelman
Wow, someone who knows what they're talking about! Yay!
Uh? How can that be? I'd think that a binary constant could just specify the bits, and the interpretation depends on what you are trying to do with the value.
unwind
I don't get it. What's a hex number, if not a binary sequence? It's all in the interpretation. Try this: define an int `0xFFFFFFFF` (on a 32-bit machine), then `printf` it as `%d`, then cast it to `unsigned int` and `printf` it `%u`. It's the same binary both ways, but it produces different numbers because of the interpretation.
Tim
Well, here's a sequence: 0101. Usually, you'd read this with the most-significant-bit (MSB) on the left. That means it's 8*0+4*1+2*0+1*1 = 5, five. But what if on your computer the MSB is on the right? (The bits don't care. They are just on or off.) So we'd get: 8*1+4*0+2*1+1*0 = 10, ten. It gets even worse when you think about the order of octets in an integer. On some computers, 255 in 32 bit is "00 00 00 FF", and in others it's "FF 00 00 00".
scraimer
Defining a number is independent of how it's stored. **1010** is a valid base-two number, just like **AA** is a valid base-sixteen number and **251** is a valid base-ten number. The fact that the computer doesn't actually put two `A`s or `251` in a register somewhere doesn't matter, just like it doesn't matter if the internal representation of **1010** is actually `1010`. Now if you're using the numbers for bit masking it might make a difference, but the question wasn't about that: it was about using them like other numbers.
Steve Losh
+5  A: 

Slightly off-topic, but newer versions of GCC added a C extension that allows binary literals. So if you only ever compile with GCC, you can use them. Documentation is here.

Sean Bright
A: 

In Pop-11 you can use a prefix made of number (2 to 32) + colon to indicate the base, e.g.

2:11111111 = 255

3:11111111 = 3280

16:11111111 = 286331153

31:11111111 = 28429701248

32:11111111 = 35468117025
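The same values can be checked in Python, whose built-in int() accepts any base from 2 to 36, much like Pop-11's prefix:

```python
# int(string, base) parses a numeral in the given base.
for base in (2, 3, 16, 31, 32):
    print(base, int("11111111", base))
```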

A: 

Although it's not direct, most languages can also parse a string. Java can convert "10101000" into an int with Integer.parseInt("10101000", 2).

Not that this is efficient or anything... Just saying it's there. If it were done in a static initialization block, it might even be done at compile time depending on the compiler.
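The same string-parsing trick in Python, for comparison:

```python
# int() with an explicit base parses a binary string at runtime.
n = int("10101000", 2)
print(n)       # 168
print(hex(n))  # '0xa8'
```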

If you're any good at binary, even with a short number it's pretty straightforward to see 0x3c as 4 ones followed by 2 zeros, whereas even that short a number in binary would be 0b111100, which might make your eyes hurt before you were certain of the number of ones.

0xff9f is exactly 4+4+1 ones, 2 zeros and 5 ones (on sight the bitmask is obvious). Trying to count out 0b1111111110011111 is much more irritating.

I think the issue may be that language designers are always balls-deep in hex/octal/binary/whatever and just think this way. If you are less experienced, I can totally see how these conversions wouldn't be as obvious.

Hey, that reminds me of something I came up with while thinking about base conversions. A sequence--I didn't think anyone could figure out the "Next Number", but one guy actually did, so it is solvable. Give it a try:

10 11 12 13 14 15 16 21 23 31 111 ?

Edit: By the way, this sequence can be created by feeding sequential numbers into a single built-in function in most languages (Java for sure).

Bill K
+1  A: 

Every language should support binary literals. I go nuts not having them!

Bonus Points!... What languages do allow binary numbers?

Icon allows literals in any base from 2 to 16, and possibly up to 36 (my memory grows dim).

Norman Ramsey
+2  A: 

Java 7 now has support for binary literals. So you can simply write 0b110101. There is not much documentation on this feature. The only reference I could find is here.

Wilfred Springer
+2  A: 

While C only has native support for bases 8, 10, and 16, it is actually not that hard to write a preprocessor macro that makes writing 8-bit binary numbers quite simple and readable:

#define BIN(d7,d6,d5,d4, d3,d2,d1,d0)                      \
(                                                          \
    ((d7)<<7) + ((d6)<<6) + ((d5)<<5) + ((d4)<<4) +        \
    ((d3)<<3) + ((d2)<<2) + ((d1)<<1) + ((d0)<<0)          \
)
int my_mask = BIN(1,1,1,0, 0,0,0,0);

This can also be used for C++.
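A rough Python equivalent of the macro, for comparison (a sketch; `bin_byte` is a made-up helper name, not a standard function):

```python
def bin_byte(*bits):
    """Assemble bits (MSB first) into an integer, like the C macro above."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

my_mask = bin_byte(1, 1, 1, 0, 0, 0, 0, 0)
print(hex(my_mask))  # 0xe0
```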

hlovdal
A: 

Forth has always allowed numbers of any base to be used (up to the size limit of the CPU, of course). Want to use binary? 2 BASE ! Octal? 8 BASE ! etc. Want to work with time? 60 BASE ! These examples all assume the base is currently set to decimal 10. To change base you must represent the desired base in the current number base: if you're in binary and want to switch back to decimal, then 1010 BASE ! will work. Most Forth implementations have 'words' to switch to common bases, e.g. DECIMAL, HEX, OCTAL, and BINARY.

tgunr