views: 285
answers: 6

Hello, I know BCD is a more intuitive datatype if you don't know binary. But I don't know why anyone would use this encoding; it doesn't seem to make a lot of sense, since it wastes part of each 4-bit group (the bit patterns above 9 are never used).

Also, I think x86 only supports adds and subs directly (you can convert them via the FPU).

Is it possible that this comes from old machines, or from other architectures?

Hope this doesn't sound too newbie :P

Thanks!

+5  A: 

BCD arithmetic is useful for exact decimal calculations, which is often a requirement for financial applications, accountancy, etc. It also makes things like multiplying/dividing by powers of 10 easier. These days there are better alternatives.
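As a rough sketch of both points (plain C, my own illustration rather than anything from a standard library): a power-of-ten multiply on packed BCD is just a nibble shift, and decimal values stay exact where binary floating point does not.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Packed BCD: one decimal digit per 4-bit nibble,
       so 0x0042 encodes the decimal number 42. */
    uint32_t bcd = 0x0042;

    /* Multiplying by 10 is a single 4-bit shift: 42 -> 420. */
    uint32_t times_ten = bcd << 4;  /* 0x0420, read as decimal 420 */
    printf("42 * 10 = %x (hex digits read as decimal)\n", times_ten);

    /* Binary floating point cannot represent 0.1 exactly: */
    printf("0.1 + 0.2 == 0.3? %s\n",
           (0.1 + 0.2 == 0.3) ? "yes" : "no");  /* prints "no" */
    return 0;
}
```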

There's a good Wikipedia article which discusses the pros and cons.

Paul R
"better alternatives"? I would build a C++ `BigDecimal` type on the hardware's BCD -- it would sure be fast if you did it that way. I'm not sure what would be "better" than using the hardware datatype.
S.Lott
I doubt modern x86 CPUs have optimized BCD implementations - they are probably implemented as microcode with a focus on compatibility, not performance.
Michael
IBM has hardware support for DECFLOAT in its POWER 6 CPUs.
Paul R
+2  A: 

BCD is space-wise wasteful, that's true, but it has the advantage of being a "fixed pitch" format, making it easy to find the nth digit in a particular number.

Another advantage is that it allows for exact arithmetic on arbitrary-size numbers. Also, thanks to the "fixed pitch" characteristic mentioned above, such arithmetic operations can easily be chunked across multiple threads (parallel processing).
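A minimal sketch of the "fixed pitch" point in C (my own illustration): the nth digit of a packed-BCD value comes out with a shift and a mask, with no division loop as a binary integer would need.

```c
#include <stdint.h>

/* Return the nth decimal digit (n = 0 is least significant) of a
   packed-BCD value. Each digit occupies exactly one nibble, so the
   lookup is a constant-time shift and mask.
   Example: bcd_digit(0x1234, 2) == 2. */
static unsigned bcd_digit(uint64_t bcd, unsigned n) {
    return (unsigned)((bcd >> (4 * n)) & 0xF);
}
```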

mjv
+2  A: 

BCD exists in modern x86 CPUs because it was in the original 8086 processor, and all x86 CPUs are 8086-compatible. BCD operations in x86 were used to support business applications way back when. BCD support in the processor itself isn't really used anymore.

Note that BCD is an exact representation of decimal numbers, which floating point is not, and that implementing BCD in hardware is far simpler than implementing floating point. These sorts of things mattered more back when processors had fewer than a million transistors and ran at a few megahertz.
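To make the hardware-simplicity point concrete, here is a sketch in C of the adjustment rule that x86's DAA instruction applies after a binary ADD of two packed-BCD bytes (my own rendering of the rule, not production code):

```c
#include <stdint.h>
#include <stdio.h>

/* Add two packed-BCD bytes (two decimal digits each), emulating
   what x86 ADD followed by DAA does. Returns the adjusted sum;
   *carry is set if the decimal result overflowed two digits. */
static uint8_t bcd_add(uint8_t a, uint8_t b, int *carry) {
    unsigned sum = a + b;
    /* If the low nibble overflowed past 9, add 6 to skip the
       unused encodings 0xA-0xF and carry into the high nibble. */
    if ((sum & 0x0F) > 9 || ((a & 0x0F) + (b & 0x0F)) > 0x0F)
        sum += 0x06;
    /* Same adjustment for the high nibble. */
    if (sum > 0x99) {
        sum += 0x60;
        *carry = 1;
    } else {
        *carry = 0;
    }
    return (uint8_t)sum;
}

int main(void) {
    int c;
    uint8_t r = bcd_add(0x38, 0x45, &c);          /* decimal 38 + 45 */
    printf("result = %02x, carry = %d\n", r, c);  /* 83, 0 */
    return 0;
}
```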

Michael
@Michael: I don't recall the x86 instructions for BCD. Can you remind me, please?
John Saunders
@John, I can think of DAA and DAS (Decimal Adjust after Addition/Subtraction). There may be a few others; it's been a while since I played with those ;-)
mjv
@mjv: Thanks. I had totally forgotten about those. I barely remember even having seen an example of using those - and that wasn't a real-world example.
John Saunders
+1  A: 

BCD is useful at the very low end of the electronics spectrum, when the value in a register is displayed by some output device. For example, say you have a calculator with several seven-segment displays that show a number. It is convenient if each display is controlled by its own group of bits.
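A hedged sketch of that idea in C (the segment bit order here is an assumption for illustration; real display wiring varies):

```c
#include <stdint.h>

/* Segment patterns for digits 0-9, one bit per segment
   (bit 0 = segment a ... bit 6 = segment g, common-cathode).
   The exact wiring is assumed for illustration. */
static const uint8_t SEG[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66,
    0x6D, 0x7D, 0x07, 0x7F, 0x6F
};

/* With BCD, each display is driven by one nibble: extract it and
   look up the pattern. No binary-to-decimal conversion needed. */
static uint8_t segments(uint8_t packed_bcd, int high_digit) {
    uint8_t d = high_digit ? (packed_bcd >> 4) : (packed_bcd & 0x0F);
    return SEG[d];
}
```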

It may seem implausible that a modern x86 processor would be used in a device with these kinds of displays, but x86 goes back a long way, and the ISA maintains a great deal of backward compatibility.

Jay Conrod
+1  A: 

I'm sure the Wikipedia article linked earlier goes into more detail, but I used BCD in IBM mainframe programming (in PL/I). BCD not only guaranteed that you could look at particular areas of a byte to find an individual digit - which is useful sometimes - but also allowed the hardware to apply simple rules to calculate the required precision and scale for, e.g., adding or multiplying two numbers together.
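For illustration, the kind of precision/scale rules being described look roughly like this (my own sketch, following the conventions of fixed-point decimal types; the names are invented):

```c
/* A DECIMAL(p, s) type: p total digits, s of them fractional. */
typedef struct { int p, s; } Dec;

/* Addition: align the fractions, keep the wider integer part,
   and allow one extra digit for a carry. */
static Dec dec_add_type(Dec a, Dec b) {
    int s  = a.s > b.s ? a.s : b.s;
    int ip = (a.p - a.s) > (b.p - b.s) ? (a.p - a.s) : (b.p - b.s);
    Dec r = { ip + s + 1, s };
    return r;
}

/* Multiplication: digit counts and scales simply add. */
static Dec dec_mul_type(Dec a, Dec b) {
    Dec r = { a.p + b.p, a.s + b.s };
    return r;
}
```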

As I recall, I was told that on mainframes, support for BCD was implemented in hardware and, at that time, it was our only option for representing floating point numbers. (We're talking 18 years ago here!)

Nij
+1  A: 

When I was in college over 30 years ago, I was told the reasons why BCD (COMP-3 in COBOL) was a good format.

None of those reasons is still relevant with modern hardware. We have fast binary fixed-point arithmetic. We no longer need to convert BCD to a displayable format by adding an offset to each BCD digit. We rarely store numbers as eight bits per digit, so the fact that BCD takes only four bits per digit isn't very interesting.
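The display conversion mentioned above is simple enough to sketch in a few lines of C (my own example):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t bcd = 0x47;  /* packed BCD for decimal 47 */
    /* Each digit becomes printable by adding the offset '0' (0x30). */
    char text[3] = {
        '0' + (bcd >> 4),    /* high digit */
        '0' + (bcd & 0x0F),  /* low digit */
        '\0'
    };
    printf("%s\n", text);  /* prints "47" */
    return 0;
}
```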

BCD is a relic, and should be left in the past, where it belongs.

John Saunders