views: 3967
answers: 14
+24  Q: 

Why use hex?

Hey! I was looking at this code at http://www.gnu.org/software/m68hc11/examples/primes_8c-source.html

I noticed that in some situations they used hex numbers, like in line 134:

for (j = 1; val && j <= 0x80; j <<= 1, q++)

Now why would they use 0x80? I am not that good with hex, but I found an online hex-to-decimal converter and it gave me 128 for 0x80.

Also before line 134, on line 114 they have this:

small_n = (n & 0xffff0000) == 0;

The hex-to-decimal converter gave me 4294901760 for that hex number. So here in this line they are doing a bitwise AND and comparing the result to 0??

Why not just use the number? Can anyone please explain and please do give examples of other situations.

Also, I have seen long lines of code that are just hex numbers and never really understood why :(

+66  A: 

In both cases you cite, the bit pattern of the number is important, not the actual number.

For example, in the first case, j is going to be 1, then 2, 4, 8, 16, 32, 64 and finally 128 as the loop progresses. In binary, that's 0000:0001, 0000:0010, 0000:0100, 0000:1000, 0001:0000, 0010:0000, 0100:0000 and 1000:0000. There's no option for binary constants in C or C++, but it's a bit clearer in Hex: 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, and 0x80.

In the second example, the goal was to remove the lower two bytes of the value. So given a value of 1,234,567,890 we want to end up with 1,234,567,168. In hex, it's clearer: start with 0x4996:02d2, end with 0x4996:0000.

James Curran
Minor correction on the second example: It removes the lower two bytes of a four-byte number. Removing the lower four bytes would simply be "small_n = 0;".
Dave Sherohman
D'Oh! Ya'know, I was debating between writing "4 digits" and "two bytes", so naturally I conflated them into a wrong statement.....
James Curran
+13  A: 

It's a bit mask. Hex values make it easy to see the underlying binary representation. n & 0xffff0000 selects the top 16 bits of n. 0xffff0000 means "16 ones and 16 zeros in binary".

0x80 means "10000000", so you start with "00000001" and keep shifting that bit to the left: "00000010", "00000100", etc. until "10000000"

Jimmy
+3  A: 

Each hex, or hexadecimal, digit represents 4 bits of data, 0 to 15 or in hex 0 to F. Two hex digits represent a byte.

Jim C
+9  A: 

With 0xffff0000 it's easy to see that it's 16 ones followed by 16 zeros in a 32-bit value, while 4294901760 is magic.

Michał Piaskowski
+5  A: 

Sometimes the visual representation of values in HEX makes code more readable or understandable. For instance bitmasking or use of bits becomes non-obvious when looking at decimal representations of numbers.

This can sometimes have to do with the amount of space a particular value type has to offer, so that may also play a role.

A typical example might be in a binary setting, so instead of using decimal to show some values, we use binary.

Let's say an object has a non-exclusive set of three properties, each with a value of either on or off - one way to represent the state of those properties is with 3 bits.

Valid representations are 0 through 7 in decimal, but that is not so obvious. More obvious is the binary representation:

000, 001, 010, 011, 100, 101, 110, 111

Also, some people are just very comfortable with hex. Note also that hard-coded magic numbers are just that, and it is not all that important which numbering system you use.

I hope that helps.

Tim
A: 

Looking at the file, that's some pretty grody code. I hope you are good at C and not using it as a tutorial...

Hex is useful when you're working directly at the bit level or just above it. E.g., working on a driver where you're looking directly at the bits coming in from a device and twiddling the results so that someone else can read a coherent result. It's a compact, fairly easy-to-read representation of binary.

Paul Nathan
There's nothing wrong with that code. Looking at that code, I couldn't find a more clear way to write it without using HEX.
Kibbee
+8  A: 

There's a direct mapping between hex (or octal, for that matter) digits and the underlying bit patterns, which is not the case with decimal. A decimal '9' represents something different with respect to bit patterns depending on what column it is in and what numbers surround it - it doesn't have a direct relationship to a bit pattern. In hex, a '9' always means '1001', no matter which column: 0x9 = '1001', 0x95 = '10010101' and so forth.

As a vestige of my 8-bit days, I find hex a convenient shorthand for anything binary. Bit twiddling is a dying skill. Once (about 10 years ago) I saw a third-year networking paper at university where only 10% (5 out of 50 or so) of the people in the class could calculate a bit mask.

ConcernedOfTunbridgeWells
+5  A: 

Generally hex numbers are used instead of decimal because the computer works with bits (binary numbers), and when you're working with bits, hexadecimal is more understandable, because going from hex to binary is much easier than going from decimal to binary.

0xFF = 1111 1111 ( F = 1111 )

but

255 = 1111 1111

because

255 / 2 = 127 (remainder 1)
127 / 2 = 63 (remainder 1)
63 / 2 = 31 (remainder 1)
... etc

Can you see that? It's much simpler to go from hex to binary.

unkiwii
+1  A: 

Would any of you guys have any hex tutorials? And not those tutorials where they show you how to convert from hex to decimal and from hex to binary. But one with examples of how to use, like some of the examples you just gave but perhaps with a bit more explanation.

Thanks

AntonioCS
Just to reiterate what everyone just said: you do not need to convert these hex numbers into decimal numbers. Ever. They're written in hex because they represent bits very nicely: 0=0000 1=0001 2=0010 3=0011 4=0100 ... A=1010 B=1011 C=1100 D=1101 E=1110 F=1111; write your own chart and memorize it.
dlamblin
+7  A: 

I find it maddening that the C family of languages has always supported octal and hex but not binary. I have long wished that they would add direct support for binary:

int mask = 0b00001111;

Many years/jobs ago, while working on a project that involved an enormous amount of bit-level math, I got fed up and generated a header file that contained defined constants for all possible binary values up to 8 bits:

#define b0        (0x00)
#define b1        (0x01)
#define b00       (0x00)
#define b01       (0x01)
#define b10       (0x02)
#define b11       (0x03)
#define b000      (0x00)
#define b001      (0x01)
...
#define b11111110 (0xFE)
#define b11111111 (0xFF)

It has occasionally made certain bit-level code more readable.

Re: "I have long wished that they would add direct support for binary" - some compilers do implement this as an extension: I've seen it in various PIC C compilers, usually something like "0b10110110"
Andrew Medico
+5  A: 

The single biggest use of hex is probably in embedded programming. Hex numbers are used to mask off individual bits in hardware registers, or split multiple numeric values packed into a single 8, 16, or 32-bit register.

When specifying individual bit masks, a lot of people start out by:

#define bit_0 1
#define bit_1 2
#define bit_2 4
#define bit_3 8
#define bit_4 16
etc...

After a while, they advance to:

#define bit_0 0x01
#define bit_1 0x02
#define bit_2 0x04
#define bit_3 0x08
#define bit_4 0x10
etc...

Then they learn to cheat, and let the compiler generate the values as part of compile time optimization:

#define bit_0 (1<<0)
#define bit_1 (1<<1)
#define bit_2 (1<<2)
#define bit_3 (1<<3)
#define bit_4 (1<<4)
etc...
mkClark
+3  A: 

To be more precise, hex and decimal are all NUMBERS. The radix (base 10, 16, etc.) is a way to present those numbers in a manner that is either clearer or more convenient.

When discussing "how many of something there are" we normally use decimal. When we are looking at addresses or bit patterns on computers, hex is usually preferred, because often the meaning of individual bytes might be important.

Hex (and octal) have the property that their bases are powers of two, so they map groupings of bits nicely. Hex maps 4 bits to one hex nibble (0-F), so a byte is stored in two nibbles (00-FF). Octal was popular on Digital Equipment (DEC) and other older machines, but one octal digit maps to three bits, so it doesn't line up with byte boundaries as nicely.

Overall, the choice of radix is a way to make your programming easier - use the one that matches the domain best.

Dan Hewett
+1  A: 

There are 8 bits in a byte. Hex, base 16, is terse. Any possible byte value is expressed using two characters from the collection 0..9, plus a,b,c,d,e,f.

Base 256 would be more terse. Every possible byte could have its own single character, but most human languages don't use 256 characters, so Hex is the winner.

To understand the importance of being terse, consider that back in the 1970's, when you wanted to examine your megabyte of memory, it was printed out in hex. The printout would use several thousand pages of big paper. Octal would have wasted even more trees.

dongilmore
+1  A: 

How do I make a hex class in VB.NET?

For example:

define bit_0 0x01

define bit_1 0x02

define bit_2 0x04

define bit_3 0x08

define bit_4 0x10

Mohd Rizal