Is it worth implementing it in hardware? If yes, why? If not, why not?


Sorry, I thought it was clear that I am talking about decimal rational numbers! OK, something like decNumber++ for C++ or decimal for .NET... Hope it is clear now :)

+2  A: 

I speculate that there are no compute-intensive applications of decimal numbers. On the other hand, floating-point numbers are extensively used in engineering applications, which must handle enormous amounts of data and do not need exact results, only results within a desired precision.

Roberto Bonvallet
They're also extensively used in graphics: a GPU's efficiency comes from doing massive amounts of floating-point operations, which cover most of what it needs to do.
Nick Craver
Agreed. For most scientific purposes, the error in your calculation and/or observation methodology is several orders of magnitude greater than the error introduced by floating-point rounding. Real number-crunching performs best when it leverages the strengths of the underlying platform. For binary computers, computation using binary numbers is more efficient.
Daniel Pryden
A: 

Do you mean the typical numeric integral types "int", "long", "short" (etc.)? Because operations on those types are definitely implemented in hardware. If you're talking about arbitrary-precision large numbers ("BigNums" and "Decimals" and such), it's probably a combination of rarity of operations using these data types and the complexity of building hardware to deal with arbitrarily large data formats.

Mike Daniels
I am talking about decimal rational numbers, like Decimal in the .NET world.
AraK
+2  A: 

No, they are very memory-inefficient, and the calculations are not easy to implement in hardware either (it can be done, of course, but it can also take a lot of time). Another disadvantage is that the decimal format is not widely used; it was popular for a time, before research showed that binary-formatted numbers were more accurate, but now programmers know better. The decimal format isn't efficient and is more lossy. Additional hardware representations also require additional instruction sets, which can lead to more complicated code.

CommuSoft
+1 for "The decimal format isn't efficient and is more lossy". Most people think decimals are more precise, but they aren't. IMHO, Microsoft has only made this worse with the .NET System.Decimal type, since it has 128 bits to work with. *Of course* a 128-bit number will be more precise than a 64-bit number. But a 128-bit binary float would be *even more* precise than a 128-bit decimal.
Daniel Pryden
"Lossy" is an ambiguous term, so I'd not use it without specifying what you mean, exactly. What usually matters is that when user inputs `1.1` and `2.2` into your application, and ask to add them, the output is `3.3` - and not `3.29...`. In that sense, `decimal` is less lossy, and it is precisely this niche it is intended for. This goes for granted for any calculations involving money - never, ever use `float` or `double` for money! - but it equally applies to any case where you deal with decimal user input.
Pavel Minaev
Because humans use decimal themselves, the decimal type "seems" less lossy (perhaps "lossy" was indeed a bad word choice). But if you add 1/3 to, say, 1/7, you will notice that the double type is more accurate (I haven't checked it, but I'm pretty sure that in most such cases the result is more accurate).
CommuSoft
+1  A: 

Decimals (and more generally, fractions) are relatively easy to implement as a pair of integers. General purpose libraries are ubiquitous and easily fast enough for most applications.

Anyone who needs the ultimate in speed is going to hand-tune their implementation (e.g. changing the divisor to suit a particular usage, algebraically combining/reordering the operations, clever use of SIMD shuffles...). Merely encoding the most common functions into a hardware ISA would surely never satisfy them -- in all likelihood, it wouldn't help at all.
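
A minimal sketch of the pair-of-integers approach (hypothetical code, not any particular library) could look like this:

```c
#include <stdio.h>

/* An exact fraction: numerator/denominator pair (denominator kept positive). */
typedef struct {
    long long num;
    long long den;
} fraction;

static long long gcd(long long a, long long b) {
    while (b != 0) { long long t = a % b; a = b; b = t; }
    return a < 0 ? -a : a;
}

/* a/b + c/d = (a*d + c*b) / (b*d), reduced to lowest terms. */
static fraction frac_add(fraction x, fraction y) {
    fraction r = { x.num * y.den + y.num * x.den, x.den * y.den };
    long long g = gcd(r.num, r.den);
    r.num /= g;
    r.den /= g;
    return r;
}

int main(void) {
    fraction a = { 1, 10 };                      /* 0.1 */
    fraction b = { 2, 10 };                      /* 0.2 */
    fraction s = frac_add(a, b);
    printf("%lld/%lld\n", s.num, s.den);         /* prints 3/10, exactly */
    return 0;
}
```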

Richard Berg
+1  A: 

The simple answer is that computers are binary machines. They don't have ten fingers, they have two. So building hardware for binary numbers is considerably faster, easier, and more efficient than building hardware for decimal numbers.

By the way: decimal and binary are number bases, while fixed-point and floating-point are mechanisms for approximating rational numbers. The two are completely orthogonal: you can have floating-point decimal numbers (.NET's System.Decimal is implemented this way) and fixed-point binary numbers (normal integers are just a special case of this).
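
To make the orthogonality concrete, here is a small C sketch (illustrative only) contrasting binary floating point with fixed-point arithmetic built on an ordinary integer; the scale factor of ten is chosen just for the example:

```c
#include <stdio.h>

int main(void) {
    /* Binary floating point: 0.1 and 0.2 have no exact binary representation. */
    double d = 0.1 + 0.2;
    printf("%.17f\n", d);                           /* 0.30000000000000004 */

    /* Fixed point: store tenths in a plain integer, so 0.1 is just 1. */
    long tenths = 1 + 2;                            /* exactly 3 tenths */
    printf("%ld.%ld\n", tenths / 10, tenths % 10);  /* 0.3 */
    return 0;
}
```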

Daniel Pryden
I see your point that floating-point numbers are more efficient, and that is true, but as I understand it they have different uses.
AraK
@AraK: "Different usage" how? The only thing that you can't do with binary numbers is perform math the way a banker would, where there is an arbitrary distinction between 0.01 having significance and 0.009 not being significant. And I would say it you're probably better off using fixed point in such a case anyway.
Daniel Pryden
Humans deal with decimal numbers. Consequently, it's much easier to explain to a person why `1/3` will render as `1.33...`, than it is to explain to the same person why `1.3` quietly becomes `1.29999`.
Pavel Minaev
On the whole though this is (surprisingly) the most precise answer so far. All modern computer architectures are binary, therefore it's easier to work with binary numbers, whether integers or floating-point. Duh.
Pavel Minaev
@Pavel Minaev: You're completely correct. However, whether that matters depends on your application. Most of the time I'm doing number crunching, it's for scientific applications where users are quite accustomed to seeing 1.2999 (or, even better: 1.3 ± 0.001).
Daniel Pryden
Yep, it definitely depends on the task at hand. For similar reasons games (and, in general, any layout/rendering code - e.g. WPF or GDI+) use floating point, often single-precision at that.
Pavel Minaev
A: 

Floating-point math essentially IS an attempt to implement decimals in hardware. It's troublesome, which is why Decimal types are partly implemented in software. Why CPUs don't support more types is a good question, but I suppose it goes back to CISC vs. RISC processors -- RISC won the performance battle, so these days they try to keep things simple, I guess.

Lee B
`decimal` is itself floating-point, it's just decimal floating-point, while `float` and `double` are binary floating-point.
Pavel Minaev
Generally speaking, perhaps. But when most people (including myself) talk of floating point in computers, they're talking about the IEEE standard floating point specification, as implemented in modern processors, not simply a number with a point in it, that has a variable number of significant digits.
Lee B
FYI, decimal floating-point is also an IEEE standard (IEEE 754-2008).
Pavel Minaev
A: 

Modern computers are usually general purpose. Floating point arithmetic is very general purpose, while Decimal has a far more specific purpose. I think that's part of the reason.

Joren
+3  A: 

There is (a tiny bit of) decimal string acceleration, but...

This is a good question. My first reaction was "macro ops have always failed to prove out", but after thinking about it, what you are talking about would go a whole lot faster if implemented in a functional unit. I guess it comes down to whether those operations are done enough to matter. There is a rather sorry history of macro op and application-specific special-purpose instructions, and in particular the older attempts at decimal financial formats are just legacy baggage now. For example, I doubt if they are used much, but every PC has the Intel BCD opcodes, which consist of

DAA, AAA, AAD, AAM, DAS, AAS
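
(For reference, the decimal adjustment DAA performs after a packed-BCD addition can be sketched in C roughly like this; it's a simplified illustration of the idea, not the instruction's exact flag semantics.)

```c
#include <stdio.h>

/* Add two packed-BCD bytes (two decimal digits each) -- roughly what an
   x86 ADD followed by DAA accomplishes. */
static unsigned char bcd_add(unsigned char a, unsigned char b, int *carry)
{
    unsigned sum = a + b;                    /* ordinary binary add    */
    if ((a & 0x0F) + (b & 0x0F) > 9)
        sum += 0x06;                         /* adjust the low nibble  */
    if (sum > 0x99) {
        sum += 0x60;                         /* adjust the high nibble */
        *carry = 1;
    } else {
        *carry = 0;
    }
    return (unsigned char)sum;
}

int main(void)
{
    int carry;
    unsigned char r = bcd_add(0x19, 0x28, &carry);  /* 19 + 28 in BCD */
    printf("0x%02X carry %d\n", r, carry);          /* 0x47 carry 0   */
    return 0;
}
```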

Once upon a time, decimal string instructions were common on high-end hardware. It's not clear that they ever made much of a benchmark difference. Programs spend a lot of time testing and branching and moving things and calculating addresses. It normally doesn't make sense to put macro-operations into the instruction set architecture, because overall things seem to go faster if you give the CPU the smallest number of fundamental things to do, so it can put all its resources into doing them as fast as possible.

These days, not even all the binary ops are actually in the real ISA. The CPU translates the legacy ISA into micro-ops at runtime. It's all part of going fast by specializing in core operations. For now the left-over transistors seem to be going into graphics and 3D work, e.g. MMX, SSE, 3DNow!

I suppose it's possible that a clean-sheet design might do something radical and unify the current (HW) scientific and (SW) decimal floating point formats, but don't hold your breath.

DigitalRoss
Very good point, though those codes aren't actually used by any floating-point decimal arithmetic implementation that I know of.
Pavel Minaev
True, but BCD strings aren't the way modern decimal types are implemented. For example, the .NET `System.Decimal` is a floating-point decimal structure with an exponent and mantissa, instead of a BCD string, which is usually implemented as fixed-point.
Daniel Pryden
Right, they are basically the same as extended floats, except that the exponent is 10^e rather than 2^e. I suppose I could improve the answer a little.
DigitalRoss
The reason the BCD operations aren't used by most floating point decimal libraries is that there is no way to access them from C, in particular - you have to drop into assembler to access the instructions. People avoid coding in assembler with good reason. Those involved in business-level calculations, in particular, want to avoid doing assembler work, not least because their code must be portable across as many platforms as possible.
Jonathan Leffler
+6  A: 

The latest revision of the standard (IEEE 754-2008) does indeed define decimal floating-point numbers for hardware, using the representations shown in the software referenced in the question. The previous version of the standard (IEEE 754-1985) did not provide decimal floating-point numbers. Most current hardware implements the 1985 standard and not the 2008 standard, but IBM's iSeries computers using POWER6 chips have such support, and so do the z10 mainframes.

The standardization effort for decimal floating point was spearheaded by Mike Cowlishaw of IBM UK, who has a web site full of useful information (including the software in the question). It is likely that in due course, other hardware manufacturers will also introduce decimal floating point units on their chips, but I have not heard a statement of direction for when (or whether) Intel might add one. Intel does have optimized software libraries for it.

The C standards committee is looking at adding support for decimal floating point; that work is TR 24732.
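
As a taste of what that support looks like in C, here is a small sketch assuming a compiler with decimal floating point support, e.g. GCC's _Decimal64 extension and its DD literal suffix; since there is no portable printf conversion for decimal types yet, it only compares values:

```c
#include <stdio.h>

int main(void)
{
    double     b = 0.1 + 0.2;        /* binary floating point  */
    _Decimal64 d = 0.1DD + 0.2DD;    /* decimal floating point */

    /* The decimal sum is exactly 0.3; the binary sum is not. */
    printf("binary : %s 0.3\n", b == 0.3   ? "equals" : "does not equal");
    printf("decimal: %s 0.3\n", d == 0.3DD ? "equals" : "does not equal");
    return 0;
}
```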

Jonathan Leffler
+1 I think this answers my question best, especially the mention of the new standard, which means we could see these chips soon after standardization, like what happened with binary floating-point numbers.
AraK
This is interesting. I wasn't aware of decimal floating point in IEEE 754-2008. However, the point still stands that decimal isn't inherently any better than binary floating point except in certain edge cases, so even when we get FPUs with built-in decimal floating point, you will still need to evaluate whether decimal or binary is better for your application. (I would expect that even with hardware support, binary floating point will likely still perform faster, although by a much smaller margin.)
Daniel Pryden
Decimal arithmetic is more easily predictable and benefits those applications where working with decimal data is a benefit. A primary beneficiary is accounting applications - unless you are the US Federal Government, you need to keep tabs on your spending accurately, and you run into far fewer edge cases if you use decimal numbers. (The 128-bit floating point decimal type can support even projected US budget deficits with accuracy - down to the fictitious penny if need so be.)
Jonathan Leffler
One of the reasons single and double-precision floating point numbers will stay more efficient is... That they're single or double-precision. If you want to compare their efficiency (and memory footprint) you'd need to compare decimals to 128-bit (quadruple-precision, I guess) floating point numbers - but if you did use FP numbers you'd probably only need single or double precision. So what I'm saying is that 128-bit decimal numbers, even with hardware acceleration, will probably still be slower than their binary floating-point alternative.
configurator
+2  A: 

The hardware you want used to be fairly common.

Older CPUs had hardware BCD (binary-coded decimal) arithmetic. (The little Intel chips had a little support, as noted by earlier posters.)

Hardware BCD was very good at speeding up FORTRAN, which used 80-bit BCD for numbers.

Scientific computing used to make up a significant percentage of the worldwide market.

Since everyone (relatively speaking) got home PCs running Windows, that market became tiny as a percentage. So nobody does it anymore.

Since you don't mind having 64-bit doubles (binary floating point) for most things, it mostly works.

If you use 128-bit binary floating point on modern hardware vector units, it's not too bad. Still less accurate than 80-bit BCD, but you get that.

At an earlier job, a colleague formerly from JPL was astonished we still used FORTRAN. "We've converted to C and C++," he told us. I asked him how they solved the problem of reduced precision. They hadn't noticed. (They also don't have the same space-probe landing accuracy they used to have. But anyone can miss a planet.)

So basically, 128-bit doubles in the vector unit are okay enough, and widely available.

My twenty cents. Please don't represent it as a floating point number :)

Tim Williscroft
+2  A: 

Some IBM processors include dedicated decimal hardware (a decimal floating-point (DFP) unit).

Adding to Daniel Pryden's answer (Sep 18 at 23:43):

The main reason is that DFP units need more transistors on a chip than BFP units. That is because of the BCD code needed to handle decimal numbers in a binary environment. IEEE 754-2008 specifies several encodings to minimize the overhead; the DPD (http://en.wikipedia.org/wiki/Densely_packed_decimal) encoding appears to be more effective than the BID (http://en.wikipedia.org/wiki/Binary_Integer_Decimal) encoding.

Normally you need 4 bits to cover the decimal range 0 to 9; the values 10 to 15 are invalid but still representable in BCD. DPD therefore compresses 3*4 = 12 bits into 10 bits, covering the range 000 to 999 with 2^10 = 1024 possible codes.
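
A back-of-the-envelope illustration of that counting argument (not the actual DPD bit layout, just a demonstration that 10 bits are enough for three digits):

```c
#include <stdio.h>

int main(void)
{
    /* Plain BCD: one 4-bit nibble per digit, so three digits occupy 12 bits
       but use only 1000 of the 4096 possible codes. */
    unsigned bcd = (9u << 8) | (9u << 4) | 9u;   /* digits 9,9,9 -> 0x999 */

    /* A 10-bit field offers 2^10 = 1024 codes, enough for 000..999.
       (Real DPD uses a cleverer layout, but the capacity argument is the same.) */
    unsigned packed = 9 * 100 + 9 * 10 + 9;      /* 999 fits in 10 bits */

    printf("BCD: 0x%X in 12 bits; value %u fits in 10 bits\n", bcd, packed);
    return 0;
}
```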

In general, BFP is faster than DFP, and BFP needs less space on a chip than DFP.

The question of why IBM implemented a DFP unit has a simple answer: they build servers for the finance market. If data represents money, it should be reliable.

With hardware-accelerated decimal arithmetic, some errors that occur in binary do not occur at all. For example, 1/5 = 0.2 in decimal but 0.001100110011... (recurring) in binary, so such recurring fractions can be avoided.

And the ubiquitous ROUND() function in Excel would no longer be needed :D (try the formula =1*(0.5-0.4-0.1), wtf!)
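
That Excel formula is just binary rounding showing through; the same effect is easy to reproduce with plain doubles in C (the printed value is what a typical IEEE 754 system produces):

```c
#include <stdio.h>

int main(void)
{
    /* 0.4 and 0.1 are not exactly representable in binary, so this
       "obviously zero" expression isn't quite zero. */
    double x = 0.5 - 0.4 - 0.1;
    printf("%.17g\n", x);   /* typically -2.7755575615628914e-17, not 0 */
    return 0;
}
```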

Hope that answers your question a little!

Charakterlos