views:

696

answers:

13

I have seen many people use base-16 numbers in code where a base-10 number would be more readable. Here is some C# code:

byte[] b = new byte[0x1000];

For me the below is more readable,

byte[] b = new byte[4096];

Is there any good reason for using base16 numbers or is it a matter of preference?

+3  A: 

It's certainly a matter of preference. However, people often use hexadecimal numbers if they want to express that the value is a power of two.
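A minimal illustration of that point (the sizes below are just examples): successive powers of two keep a fixed digit shape in hex, while their decimal forms look arbitrary.

```csharp
using System;

// Powers of two in hex: a 1, 2, 4 or 8 followed by zeros.
int oneKb  = 0x400;     // 1024
int fourKb = 0x1000;    // 4096
int oneMb  = 0x100000;  // 1048576

Console.WriteLine($"{oneKb} {fourKb} {oneMb}");
```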

Martin v. Löwis
+12  A: 

If the "bits" of the number mean something, then base 16 is easier to read. For example, using 0x7FFFFFFF for the maximum int value is more readable than 2147483647.
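A quick check of that claim, runnable as a C# top-level program:

```csharp
using System;

// 0x7FFFFFFF is a zero sign bit followed by 31 one-bits:
// the largest value a signed 32-bit int can hold.
Console.WriteLine(0x7FFFFFFF == 2147483647);   // True
Console.WriteLine(0x7FFFFFFF == int.MaxValue); // True
```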

David Tinker
However, the constant Int32.MaxValue (or just int.MaxValue) is even easier to read. ;)
Simon Svensson
+2  A: 

One of the reasons I can think of is that hex numbers translate more directly to binary: each hex digit corresponds to exactly four binary digits. So, when working with flags or binary protocols, you gain readability in your code.

E.g.

0x3fa2

is

0011 1111 1010 0010
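You can check the digit-by-digit mapping with `Convert.ToString`, which renders an integer in a given base (a small sketch):

```csharp
using System;

// Each hex digit expands to exactly four binary digits.
string bits = Convert.ToString(0x3FA2, 2).PadLeft(16, '0');
Console.WriteLine(bits);  // 0011111110100010
```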
Pablo Santa Cruz
+24  A: 

I'd expect to see hex values in scenarios where you need to understand the underlying bit/byte structure of the number e.g.

int red = 0xff0000; // rgb values, one per byte

flags &= 0xff; // mask out some bytes

In your above example, the hex usage seems a little gratuitous. You're really interested in the length of the buffer rather than the structure of the number, and having to translate from hex to decimal seems like an unnecessary hurdle.
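To make the first snippet above concrete, here is a sketch of unpacking those bytes (assuming the usual 0xRRGGBB layout; the color value is illustrative):

```csharp
using System;

int color = 0xFF8040;           // 0xRRGGBB packed into one int
int r = (color >> 16) & 0xFF;   // 0xFF = 255
int g = (color >> 8) & 0xFF;    // 0x80 = 128
int b = color & 0xFF;           // 0x40 = 64

Console.WriteLine($"{r} {g} {b}");
```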

Brian Agnew
+1  A: 

Base-16 numbers better represent the underlying bits of a number: each hexadecimal digit is equivalent to four bits.

Example:

  • base 2: 1111 1111
  • base 10: 255
  • base 16: 0xFF
fjsj
+2  A: 

In this example, it's a lot easier not to make a mistake when writing out a real power of two. In base 10, you might accidentally type:

byte[] b = new byte[4069];

Oops! But in base 16, it's pretty much impossible to make that mistake, since the representation is simpler.

Adam Bellaire
A: 

It is a matter of preference but I've always found that if I'm working with data on the bit or byte level that hex is easier to read.

Austin Salonen
+1  A: 

Representing values in base 16 (hexadecimal) numbers can make more sense for some uses. For example; if you were defining constants for a bitfield:

const int BIT_0 = 0x1000;
const int BIT_1 = 0x2000;
const int BIT_2 = 0x4000;
const int BIT_3 = 0x8000;

is much more readable than the decimal equivalent:

const int BIT_0 = 4096;
const int BIT_1 = 8192;
const int BIT_2 = 16384;
const int BIT_3 = 32768;

In general, data which is fundamentally base-2 often is clearer when expressed in hexadecimal.
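A sketch of how such constants are typically used, reusing the names above (the particular combination is illustrative):

```csharp
using System;

const int BIT_0 = 0x1000;
const int BIT_1 = 0x2000;

int flags = BIT_0 | BIT_1;            // combine flags: 0x3000
bool bit0Set = (flags & BIT_0) != 0;  // test a flag: true

Console.WriteLine($"{flags:X} {bit0Set}");
```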

Dave Rigby
+8  A: 

As some mentioned earlier, base 16 better reflects the value of the underlying bits. BUT, a dirty little secret is... sometimes people do it just to be extra pretentious, without any good reason.

+1 for the dirty little secret. Perhaps I'm not enough of a geek (!) but 4096 is more immediate to me in the above example than 0x1000
Brian Agnew
Really? I would only suspect pretense if I saw Octal.
Nosredna
@Nosredna - you'd suspect 'pretentiousness' rather than 'pretense', surely ?
Brian Agnew
Pretense: 2 a : mere ostentation : pretentiousness
Nosredna
@Nosredna - apologies. You (I) learn something every day :-)
Brian Agnew
I should have gone with pretentiousness, though. I'm guilty of abusing expectations there. :-)
Nosredna
+6  A: 

In some situations, hex numbers are preferred (preference, tradition, easier to spot a potential bug). For example, when you declare bit flags:


[Flags]
enum SomeOptions
{
  Opt1 = 0x1,
  Opt2 = 0x2, 
  Opt3 = 0x4, 
  Opt4 = 0x8,
  Opt5 = 0x10
}

It's much easier to see which flags are combinations of other ones (or whether all flags are powers of 2, and therefore independent).

Another example:

int myNum = flags & 0x7fff;

It's easy to see which bits you are including in myNum and which you are not.

Also, hex numbers are commonly used when you want to express numbers that are powers of 2, as it's easier to see whether you have made a mistake (compare 0x10000000 and... ehm, some big number :) ).
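A runnable sketch of combining and testing such flags; it uses a built-in [Flags] enum, System.IO.FileAccess (Read = 1, Write = 2), purely for illustration:

```csharp
using System;
using System.IO;

var access = FileAccess.Read | FileAccess.Write;    // combined flags
Console.WriteLine(access.HasFlag(FileAccess.Read)); // True
Console.WriteLine((int)access);                     // 3
```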

Ravadre
I find the `1 << n` notation better at conveying that a flag has only a single bit set at position `n` (e.g. `Opt1 = 1 << 0, Opt2 = 1 << 1, Opt3 = 1 << 2, ...`). Also, for seeing "which flags are combined of which ones", using `Opt1AndOpt5 = Opt1 | Opt5` is clearer than any direct numeric assignment. Similarly, assigning the `0x7fff` bitmask value to a well-named constant will be clearer than using the numeric value directly.
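The shift-based style this comment describes, sketched with plain ints (the names mirror the comment's hypothetical ones):

```csharp
using System;

// 1 << n sets only bit n; the shift count documents the bit position.
int opt1 = 1 << 0;              // 0x01
int opt2 = 1 << 1;              // 0x02
int opt3 = 1 << 2;              // 0x04
int opt1AndOpt3 = opt1 | opt3;  // named combination, clearer than 0x05

Console.WriteLine(opt1AndOpt3);
```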
Emperor XLII
+2  A: 

Previous answers have covered the big points: it's much easier to recognize the bits in base 16 (and base 8, for that matter), and it makes powers of two more obvious when they really matter. Many people also find "slicing" base-16 numbers much easier when an integral field encodes several different values (for example, RGB values).

However, custom is also a big part: people expect base 10 numbers to be used as "normal" numbers. If you see a base 10 number in code, you expect it to be added, subtracted, multiplied, etc, treated as a purely numeric value, and possibly even displayed to an end user as a purely numeric value. On the other hand, if a number is declared in hex, I'm more likely to expect that it's going to be used for bitmasks, encoding multiple values into one bit field, or other bit-level wizardry.

jlc
+1  A: 

You need to study digital theory to understand why people like to use hex.

http://www.amazon.com/Digital-Fundamentals-10th-Thomas-Floyd/dp/0132359235/ref=sr_1_1?ie=UTF8&s=books&qid=1252163418&sr=8-1

The underlying system of a computer uses binary, which consists of 1s and 0s. That is hard for human beings to read, so we use hex to represent the binary.

For example:

0x2F = 0010 1111 (b)

Can you see the mapping relationship here?

0x2 = 0010 (b), 0xF = 1111 (b)

Another example:

0xE3 = 1110 0011 (b)

Each hex digit can be expanded into four binary digits.

Frankly speaking, you need to study the digital fundamentals and do some exercises before you are comfortable with hexadecimal representation.

janetsmith
A: 

If you're going to deal with powers of 2, Hex makes more sense from a round number perspective.

If you're going to deal in decimal, you should probably write your code to round decimal numbers where bits don't matter.

byte[] b = new byte[4000];

rather than

byte[] b = new byte[4096];

It's easier to write in hex if you're going to deal with power-of-2 numbers anyway.

byte[] b = new byte[0x1000];

But it's personal preference. The brain, however, likes round numbers. They're much easier to remember (although, I must admit, 256, 4096, 16384, 32768, 65536 are burned into my brain after all these years).

Mystere Man