Hi,
Need a refresher on bits/bytes, hex notation and how it relates to programming (C# preferred).
Looking for a good reading list (online preferably).
Here is some basic reading: http://www.learn-c.com/data_lines.htm
Bits and bytes hardly ever come up in C#, since the CLR handles memory by itself, and the framework already has classes and methods for hex notation and all those things. Still, it is a fun read.
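For instance, here is a minimal sketch of the framework's hex formatting and parsing (the value 255 is just an example):

using System;
using System.Globalization;

class FrameworkHexDemo
{
    static void Main()
    {
        int value = 255;

        string hex = value.ToString("X2");                 // "FF" - standard hex format specifier
        int back = int.Parse(hex, NumberStyles.HexNumber); // 255 again

        Console.WriteLine($"{hex} -> {back}");
    }
}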
Write Great Code is a good primer on this topic, among others... it takes you from the bare metal up to higher-level languages.
There are several layers to consider here:
In the electronic paradigm, everything is a wire.
A single wire represents a single bit.
0 is the LOW voltage, 1 is the HIGH voltage. The voltages may be [0, 5], [-3.3, 3], [-5, 5], [0, 1.3], etc. The key thing is that there are only two voltage levels which control the action of the transistors.
A byte is a collection of wires (to be precise, it is probably stored in a set of flip-flops called a register, but let's leave it as "wires" for now).
A bit is 0 or 1.
A byte is, in modern systems, 8 bits. Ancient systems might have had 10-bit bytes or other sizes, but those machines are gone today.
A nybble is 4 bits; half a byte.
Hexadecimal is an efficient representation: a single hex digit stands for 4 bits (one nybble), so F maps to 1111, which is more compact than writing 15. A full byte is two hex digits, and the notation stays clear when you write down multiple byte values: FF is unambiguous; 1515 can be read several different ways.
Historically, octal (base 8) has also been used. However, the only place where I have encountered it is Unix file permissions (e.g., chmod 755).
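To tie the notation above back to C#, here is a small sketch (the values are just illustrative); C# has hex and binary literals but no octal literal, so octal goes through Convert:

using System;

class NotationDemo
{
    static void Main()
    {
        int fromHex = 0xFF;                        // hex literal: one byte, decimal 255
        int fromBinary = 0b1111_1111;              // binary literal (C# 7.0+): the same byte
        int fromOctal = Convert.ToInt32("755", 8); // no octal literal in C#, so parse base 8

        Console.WriteLine(fromHex == fromBinary);        // True
        Console.WriteLine(Convert.ToString(fromHex, 2)); // 11111111
        Console.WriteLine(fromOctal);                    // 493 (the Unix rwxr-xr-x permission)
    }
}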
Since, on the electronic layer, it is most efficient to collect memory in groups of 2^n, hex is a natural notation for representing memory. Further, if you happen to work at the driver level, you may need to control a specific bit, which will require the use of bit-level operators. It is much clearer which bits are HIGH if you write 0x0F & outputByte rather than 15 & outputByte.
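As a rough C# illustration (outputByte and the masks here are made up for the example), testing, setting, and clearing individual bits looks like this:

using System;

class BitMaskDemo
{
    static void Main()
    {
        byte outputByte = 0b1010_0110;               // arbitrary example value, 0xA6

        bool bit2High = (outputByte & 0x04) != 0;    // test bit 2
        byte bit0Set = (byte)(outputByte | 0x01);    // set bit 0
        byte bit7Clear = (byte)(outputByte & ~0x80); // clear bit 7

        Console.WriteLine($"{bit2High}, 0x{bit0Set:X2}, 0x{bit7Clear:X2}"); // True, 0xA7, 0x26
    }
}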
In general, much of modern programming does not need to concern itself with binary and hexadecimal. However, if you do land in an area that needs it, there is no slipping by; you really have to know it.
Particular areas that require knowledge of binary include: embedded systems, driver writing, operating system writing, network protocols, and compression algorithms.
While you wanted C#, C# is really not the right language for bit-level manipulation. Traditionally, C and C++ are the languages used for bit work. Erlang works with bit manipulation, and Perl supports it as well. VHDL is completely bit-oriented, but is fairly difficult to work with from the typical programming perspective.
Here is some sample C code for performing different bitwise operations:
char a, b, c;
c = a ^ b;    // XOR
c = a & b;    // AND
c = a | b;    // OR
c = ~(a & b); // NOT AND (NAND)
c = ~a;       // NOT
c = a << 2;   // left shift 2 places
c = a >> 2;   // right shift 2 places
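For comparison, the same operators exist in C#; a rough equivalent of the snippet above (the operands are promoted to int, so the results are cast back to byte; the values of a and b are arbitrary):

byte a = 0xCA, b = 0x0F;
byte c;
c = (byte)(a ^ b);   // XOR
c = (byte)(a & b);   // AND
c = (byte)(a | b);   // OR
c = (byte)~(a & b);  // NOT AND (NAND)
c = (byte)~a;        // NOT
c = (byte)(a << 2);  // left shift 2 places
c = (byte)(a >> 2);  // right shift 2 places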