views:

549

answers:

10

I don't know much about RAM and HDD architecture, or how electronics deals with chunks of memory, but this has always piqued my curiosity: why did we stop at 8 bits for the smallest element in a computer value?

My question may look very dumb, because the answer seems obvious, but I'm not so sure...

Is it because 2^3 allows it to fit perfectly when addressing memory? Is the electronics especially designed to store chunks of 8 bits? If so, why not use wider words? Is it because 8 divides 32, 64 and 128, so that processor words can be given several of those words? Is it just convenient to have 256 values in such a tiny space?

What do you think ?

My question is a little too metaphysical, but I want to make sure it's just a historical reason and not a technological or mathematical one.

As an anecdote, I was also thinking about the ASCII standard, in which most of the first characters are useless with stuff like UTF-8; I'm also trying to think about some tinier and faster character encoding...

+2  A: 

Since computers work with binary numbers, all powers of two are important.

8-bit numbers can represent 256 (2^8) distinct values, enough for all the characters of English and quite a few extra ones. That made the numbers 8 and 256 quite important.
The fact that many CPUs used to (and still do) process data in 8-bit chunks helped a lot.

Other important powers of two you might have heard about are 1024 (2^10 = 1k) and 65536 (2^16 = 64k).

sbi
+9  A: 

Not all bytes are 8 bits. Some are 7, some 9, some other values entirely. The reason 8 is important is that, in most modern computers, it is the standard number of bits in a byte. As Nikola mentioned, a bit is the actual smallest unit (a single binary value, true or false).

As Will mentioned, this article http://en.wikipedia.org/wiki/Byte describes the byte and its variable-sized history in some more detail.

The general reasoning behind why 8, 256, and other numbers are important is that they are powers of 2, and computers run using a base-2 (binary) system of switches.

peachykeen
Wikipedia touches on why a byte has come to be associated with 8-bits - it hasn't always been this way, nor is it _always_ the case on every computer. http://en.wikipedia.org/wiki/Byte
Will A
The smallest element in computer architecture is a bit. A byte is always 8 bits.
Nikola Markezic
@Nikola: As the Wikipedia article Will posted describes, bytes aren't always 8 bits. On most modern, standard PCs (and even those Macs :P), an 8-bit byte is used. Actual byte sizes have a much wider historical range, although I'd say most were between 6 and 16 bits per byte. You are right that the smallest element is a bit, though.
peachykeen
I can't even imagine anything smaller than a bit (computer-related or not); what would that be?
Alxandr
Octets are always 8 bits: a bit like bytes, but far more fixed in their bit-ness.
Dan D
You can work with other groups of bits, from 1, 2 or 5 on, but that was not the purpose of the byte. Byte sizes did vary, but there is a reason why other numbers of bits (ones that, I'd like to point out, are not of the form 2^x) failed: that simplification did not simplify things, but made them more complicated in later development. There is a reason why a byte is 8 (or, as I like to put it, 2^3) bits, and that reason is math, and the compatibility of number systems.
Nikola Markezic
@Alxandr - http://en.wikipedia.org/wiki/Unary_numeral_system - smaller than a bit but what to call it without resorting to swearing!
Will A
@Nikola, other bit sizes didn't fail, they just didn't do as well as 8, and died out. There's a difference between an antelope that gets eaten because it isn't as fast as its herdmates, and one that gets eaten because it tries to eat the tiger first. The latter is failing; the former is not succeeding enough. 7-bit bytes didn't succeed enough. If we were to build from scratch now, we'd probably have a larger size of 64 bits, and not have to deal with bytes and words being mismatched.
Jon Hanna
@Nikola Markezic: "Byte is allways 8 bits." No, this is incorrect. The size of a byte may change depending on the chip, OS or architecture. I personally saw a programming manual for a chip where a byte was 4 bits, and have heard of other chips with non-8-bit bytes.
SigTerm
While 6- or 9-bit bytes aren't very common anymore, a lot of DSP processors today use 16- and 32-bit bytes.
nos
Don't mistake the size of the "word" in a processor architecture for the size of the byte. A byte is a unit, while a "word" consists of some number of bits that can differ from one architecture to another. While in most cases the size of the "word" is a byte, or 8 bits, a "word" can also consist of 4 bits or 16 bits (I worked on several processors that didn't have a word size of 8 bits; one was a custom design and one was a Motorola, but I don't remember which model). A byte is a unit: you can have a "word" consisting of 2 bytes = 16 bits, or half a byte = 4 bits. A byte is a unit!
Nikola Markezic
Don't mistake not mistaking the size of an octet for the size of a byte with mistaking the size of a word for the size of a byte. In most cases today a word is not a byte. I haven't used a machine where a word was the same as a byte since the 1980s (unless you count the micro-controllers inside gadgets).
Jon Hanna
+1  A: 

I believe the main reason has to do with the original design of the IBM PC. The Intel 8080 CPU was the first precursor to the 8086 which would later be used in the IBM PC. It had 8-bit registers. Thus, a whole ecosystem of applications was developed around the 8-bit metaphor. In order to retain backward compatibility, Intel designed all subsequent architectures to retain 8-bit registers. Thus, the 8086 and all x86 CPUs after that kept their 8-bit registers for backwards compatibility, even though they added new 16-bit and 32-bit registers over the years.

The other reason I can think of is that 8 bits is perfect for fitting a basic Latin character set. You cannot fit it into 4 bits, but you can into 8. Thus, you get the whole 256-value ASCII charset. It is also the smallest power of 2 that gives you enough bits to fit a character set. Of course, these days most character sets are actually 16 bits wide (i.e. Unicode).

dacris
Are you sure the 8080 was used in a PC? I am pretty certain it was the 8086 (and the lower-cost 8088) that were the first IBM PC CPUs.
ysap
More interesting documentation from Wikipedia: "Marketed as source compatible, the 8086 was designed so that assembly language for the 8008, 8080, or 8085 could be automatically converted into equivalent (sub-optimal) 8086 source code, with little or no hand-editing." http://en.wikipedia.org/wiki/Intel_8086
dacris
Eight-bit bytes were used in the '60s. The IBM System/360 was possibly the first.
ergosys
keep in mind that ASCII is a 7 bit character set, not 8 bit.
nos
And in the beginning, the internet was a 7-bit system. http://www.ietf.org/rfc/rfc2045.txt "One of the notable limitations of RFC 821/822 based mail systems is the fact that they limit the contents of electronic mail messages to relatively short lines (e.g. 1000 characters or less [RFC-821]) of 7bit US-ASCII." So we used, what was it, ROT-13 or UUE?, to encode larger data values. Ah, good times.
JustBoo
@ysap, you're right, the 8086 was 16 bits in, 16-bit registers, 16 bits out. The 8088 was 8 bits in, 16-bit registers, and 8 bits out. So the 8088 took 2 clock cycles just to load a register. It was much cheaper to manufacture, or so they claimed.
JustBoo
If I had to vote for an 8-bit-byte machine that set the path for others to follow, I'd vote for the PDP-11 (8-bit bytes, 16-bit words, byte-addressable), a good 8 years before the 8086. Not that the PDP-11 was the first with 8-bit bytes, just that it was influential.
Jon Hanna
+1  A: 

We normally count in base 10, a single digit can have one of ten different values. Computer technology is based on switches (microscopic) which can be either on or off. If one of these represents a digit, that digit can be either 1 or 0. This is base 2.

It follows from there that computers work with numbers that are built up as a series of 2 value digits.

  • 1 digit, 2 values
  • 2 digits, 4 values
  • 3 digits, 8 values, etc.

When processors are designed, they have to pick a size that the processor will be optimized to work with. To the CPU, this is considered a "word". Early CPUs were based on word sizes of four bits and soon after 8 bits (1 byte). Today, CPUs are mostly designed to operate on 32-bit and 64-bit words. But really, the two-state "switch" is why all computer numbers tend to be powers of 2.

Arnold Spence
A: 

Historical reasons, I suppose. 8 is a power of 2; 4 bits (2^4 = 16 values) is far too little for most purposes, and 16-bit hardware (the next power of two) came much later.

But the main reason, I suspect, is the fact that they had 8-bit microprocessors, then 16-bit microprocessors, whose words could very well be represented as 2 octets, and so on. You know, historical cruft, backward compatibility, etc.

Another, similarly pragmatic reason against "scaling down": if we, say, used 4 bits as one word, we would basically get only half the throughput compared with 8 bits, aside from overflowing much faster.

You can always squeeze e.g. 2 numbers in the range 0..15 into one octet... you just have to extract them by hand. But unless you have, like, gazillions of data sets to keep in memory side by side, this isn't worth the effort.

delnan
+8  A: 

Historically, bytes haven't always been 8 bits in size (for that matter, computers don't have to be binary either, but that sees less action in practice). It is for this reason that IETF and ISO standards often use the term "octet": they don't use "byte" because they don't want to assume it means 8 bits when it doesn't.

Indeed, when "byte" was coined it was defined as a 1-to-6-bit unit. 7, 9, 36: there have been plenty of byte sizes throughout history.

8 was a mixture of commercial success, it being a convenient enough number for the people thinking about it (which would have fed into each other) and no doubt other reasons I'm completely ignorant of.

The ASCII standard you mention assumes a 7-bit byte, and was based on earlier 6-bit communication standards.


Edit: It may be worth adding to this, as some are insisting that those saying bytes aren't always octets are confusing bytes with words.

An octet is a name given to a unit of 8 bits (from the Latin for eight). If you are using a computer (or, at a higher abstraction level, a programming language) where bytes are 8-bit, then this is easy to deal with; otherwise you need some conversion code (or conversion in hardware). The concept of octet comes up more in networking standards than in local computing, because in being architecture-neutral it allows for the creation of standards that can be used in communicating between machines with different byte sizes, hence its use in IETF and ISO standards (incidentally, ISO/IEC 10646 uses octet where the Unicode Standard uses byte for what is essentially - with some minor extra restrictions on the latter part - the same standard, though the Unicode Standard does detail that they mean octet by byte even though bytes may be different sizes on different machines). The concept of octet exists precisely because 8-bit bytes are common (hence the choice of using them as the basis of such standards) but not universal (hence the need for another word to avoid ambiguity).

Historically, a byte was the size used to store a character, a matter which in turn builds on practices, standards and de-facto standards which pre-date computers used for telex and other communication methods, starting perhaps with Baudot in 1870 (I don't know of any earlier, but am open to corrections).

This is reflected by the fact that in C and C++ the unit for storing a byte is called char, whose size in bits is defined by CHAR_BIT in the standard limits.h header. Different machines would use 5, 6, 7, 8, 9 or more bits to define a character. These days of course we define characters as 21-bit and use different encodings to store them in 8-, 16- or 32-bit units (and non-Unicode-authorised ways like UTF-7 for other sizes), but historically that was the way it was.

In languages which aim to be more consistent across machines, rather than reflecting the machine architecture, byte tends to be fixed in the language, and these days this generally means it is defined in the language as 8-bit. Given the point in history when they were made, and that most machines now have 8-bit bytes, the distinction is largely moot, though it's not impossible to implement a compiler, run-time, etc. for such languages on machines with different sized bytes, just not as easy.

A word is the "natural" size for a given computer. This is less clearly defined, because it affects a few overlapping concerns that would generally coïncide, but might not. Most registers on a machine will be this size, but some might not. The largest address size would typically be a word, though this may not be the case (the Z80 had an 8-bit byte and a 1-byte word, but allowed some doubling of registers to give some 16-bit support including 16-bit addressing).

Again we see here a difference between C and C++, where int is defined in terms of word size and long is defined to take advantage of a processor with a "long word" concept, should such exist, though it may be identical to int in a given case. The minimum and maximum values are again in the limits.h header. (Indeed, as time has gone on, int may be defined as smaller than the natural word size, as a combination of consistency with what is common elsewhere, reduced memory usage for arrays of ints, and probably other concerns I don't know of.)

Java and .NET languages take the approach of defining int and long as fixed across all architectures, making the differences an issue for the runtime (particularly the JITter) to deal with. Notably though, even in .NET the size of a pointer (in unsafe code) will vary depending on architecture to be the underlying word size, rather than a language-imposed word size.

Hence, octet, byte and word are all very independent of each other, despite the relationship of octet == byte and word being a whole number of bytes (and a whole binary-round number like 2, 4, 8 etc.) being common today.

Jon Hanna
+2  A: 

Computers are built upon digital electronics, and digital electronics works with states. One fragment can have 2 states, 1 or 0 (if the voltage is above some level then it is 1, if not then it is 0). To represent that behavior the binary system was introduced (well, not introduced, but widely accepted).

So we come to the bit. The bit is the smallest fragment of the binary system. It can take only 2 states, 1 or 0, and it represents the atomic fragment of the whole system.

To make our lives easier the byte (8 bits) was introduced. To give you an analogy: we don't express weight in grams, although that is the base measure of weight; we use kilograms, because they are easier to use and understand. One kilogram is 1000 grams, which can be expressed as 10 to the power of 3. So when we go back to the binary system and use the same power we get 8 (2 to the power of 3 is 8). That was done because using only bits was overly complicated in everyday computing.

That held on, so further in the future, when we realized that 8 bits was again too small and becoming complicated to use, we added +1 to the power (2 to the power of 4 is 16), then again 2^5 is 32, and so on; 256 is just 2 to the power of 8.

So your answer is: we follow the binary system because of the architecture of computers, and we go up in the power to get values we can simply handle every day, and that is how you got from a bit to a byte (8 bits) and so on!

(2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, and so on) (2^x, x=1,2,3,4,5,6,7,8,9,10 and so on)

Nikola Markezic
+3  A: 

ASCII encoding required 7 bits, and EBCDIC required 8 bits. Extended ASCII codes (such as the ANSI character sets) used the 8th bit to expand the character set with graphics, accented characters and other symbols. Some architectures made use of proprietary encodings; a good example of this is the DEC PDP-10, which had a 36-bit machine word. Some operating systems on this architecture used packed encodings that stored 6 characters in a machine word for various purposes such as file names.

By the 1970s, the success of the D.G. Nova and DEC PDP-11, which were 16-bit architectures, and of IBM mainframes with 32-bit machine words was pushing the industry towards an 8-bit character by default. The 8-bit microprocessors of the late 1970s were developed in this environment, and this became a de facto standard, particularly as off-the-shelf peripheral chips such as UARTs, ROM chips and FDC chips were being built as 8-bit devices.

By the latter part of the 1970s the industry had settled on 8 bits as a de facto standard, and architectures such as the PDP-8 with its 12-bit machine word became somewhat marginalised (although the PDP-8 ISA and derivatives still appear in embedded system products). 16- and 32-bit microprocessor designs such as the Intel 80x86 and MC68K families followed.

ConcernedOfTunbridgeWells
+1  A: 

The important number here is binary 0 or 1. All your other questions are related to this.

Claude Shannon and George Boole did the fundamental work on what we now call information theory and Boolean arithmetic. In short, this is the basis of how a digital switch, with only the ability to represent 0 OFF and 1 ON can represent more complex information, such as numbers, logic and a jpg photo. Binary is the basis of computers as we know them currently, but other number base computers or analog computers are completely possible.

In human decimal arithmetic, the powers of ten have significance: 10, 100, 1000, 10,000 each seem important and useful. Once you have a computer based on binary, powers of 2 likewise become important. 2^8 = 256 is enough for an alphabet, punctuation and control characters. (More importantly, 2^7 is enough for an alphabet, punctuation and control characters, and 2^8 leaves room for those ASCII characters plus a check bit.)

drewk
+1  A: 

Charles Petzold wrote an interesting book called Code that covers exactly this question. See chapter 15, Bytes and Hex.

Quotes from that chapter:

Eight-bit values are inputs to the adders, latches and data selectors, and also outputs from these units. Eight-bit values are also defined by switches and displayed by lightbulbs. The data path in these circuits is thus said to be 8 bits wide. But why 8 bits? Why not 6 or 7 or 9 or 10?

... there's really no reason why it had to be built that way. Eight bits just seemed at the time to be a convenient amount, a nice biteful of bits, if you will.

...For a while, a byte meant simply the number of bits in a particular data path. But by the mid-1960s, in connection with the development of IBM's System/360 (their large complex of business computers), the word came to mean a group of 8 bits.

... One reason IBM gravitated toward 8-bit bytes was the ease in storing numbers in a format known as BCD. But as we'll see in the chapters ahead, quite by coincidence a byte is ideal for storing text because most written languages around the world (with the exception of the ideographs used in Chinese, Japanese and Korean) can be represented with fewer than 256 characters.

Ash