A floating-point type represents a number by storing its significant digits (the mantissa) and its exponent in separate bit fields, so it fits in 16, 32, 64 or 128 bits.

A fixed-point type stores a number in two parts: one representing the integer part, the other representing the fractional part as a sum of negative powers of two, 2^-1, 2^-2, 2^-3, etc.

Floats are better in that the exponent gives them a wider range, but not if one wants to store numbers with more precision over a limited range, for example restricting the integer part to -16..16 and spending the remaining bits on digits past the radix point.

In terms of performance, which one is faster, or are there cases where each is faster than the other?

In video game programming, does everybody use floating point because the FPU makes it faster, or because the performance drop is just negligible, or do they make their own fixed-point type?

Why isn't there any fixed-point type in C/C++?

+4  A: 

That definition covers a very limited subset of fixed point implementations.

It would be more correct to say that in fixed point only the mantissa is stored and the exponent is a constant determined a priori. There is no requirement for the binary point to fall inside the mantissa, and definitely no requirement that it fall on a word boundary. For example, all of the following are "fixed point":

  • 64-bit mantissa, scaled by 2^-32 (this fits the definition listed in the question)
  • 64-bit mantissa, scaled by 2^-33 (now the integer and fractional parts cannot be separated by an octet boundary)
  • 32-bit mantissa, scaled by 2^4 (now there is no fractional part)
  • 32-bit mantissa, scaled by 2^-40 (now there is no integer part)

GPUs tend to use fixed point with no integer part (typically a 32-bit mantissa scaled by 2^-32). Therefore APIs such as OpenGL and Direct3D often use floating-point types which are capable of holding these values. However, manipulating the integer mantissa is often more efficient, so these APIs allow specifying coordinates (in texture space, color space, etc.) this way as well.

As for your claim that C++ doesn't have a fixed point type, I disagree. All integer types in C++ are fixed point types. The exponent is often assumed to be zero, but this isn't required and I have quite a bit of fixed-point DSP code implemented in C++ this way.

Ben Voigt
Wait, aren't most GPUs floating-point nowadays?
Oli Charlesworth
The difference between an integer and a true fixed-point type is that when doing fixed-point using integers you have to shift the result after a multiply or divide operation. A built-in fixed-point type would do that under the hood. On fixed-point DSP hardware this is directly supported by the instruction set, but otherwise it makes fixed-point * and / operations slower than the corresponding plain integer operations.
Clifford
@Oli: Since DirectX9 they are; it is part of the DirectX9 specification.
Clifford
Well, you CAN shift the result after each multiply or divide, but that isn't necessarily a good idea. It's often desirable simply to let the (hidden) exponent of the result be different from the arguments, and do the shift at the very end. Sometimes the shift may never be necessary (if for example you simply want to compare the result to a constant, you can change the exponent of the constant to match without needing a runtime shift). Furthermore, shifting by certain intervals is free on many architectures, by storing e.g. AH or AL registers to memory instead of EAX.
Ben Voigt
There's another pretty popular fixed-point implementation: infinite-digit decimal mantissa, scaled by 10^-2. Otherwise known as *money*. Obviously not relevant to game programming (well, not to the graphics part anyways, but actually pretty important for MMORPGs), but one of the most widely used fixed-point datatypes.
Jörg W Mittag
@Jörg: Actually, at least for the `euro` currency, it was specified that operations should be conducted with 5 digits (to avoid accumulating too much rounding error) and then rounded to 2 digits using mathematical rounding at the very end. Furthermore, a number of currencies do not have any fractional digits at all (for example the yen, whose subdivision has not been used since 1954).
Matthieu M.
@Jorg: Good point, although it seems like computer implementations of the money data type often use a 64-bit mantissa scaled by 10^-4. The infinite-digit representation actually might be more useful for some games; 64 bits is enough for the real world for the foreseeable future.
Ben Voigt
+1  A: 

The difference between floating-point and integer math depends on the CPU you have in mind. On Intel chips the difference is not big in clock ticks. Integer math is still faster because there are multiple integer ALUs that can work in parallel. Compilers are also smart enough to use special address-calculation instructions to fold an add and a multiply into a single instruction. Conversion counts as an operation too, so just choose your type and stick with it.

In C++ you can build your own type for fixed-point math. You just define a struct with one int, overload the appropriate operators, and make them do what they normally do plus a shift to put the radix point back in the right position.
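A sketch of what that struct might look like, assuming a Q16.16 layout (the name `Fixed16` and the 16-bit shift are illustrative choices, not from the answer):

```cpp
#include <cstdint>

// One-int fixed-point type: raw holds value * 2^16 (Q16.16).
struct Fixed16 {
    int32_t raw;

    static Fixed16 fromInt(int i)       { return {i << 16}; }
    static Fixed16 fromDouble(double d) { return {(int32_t)(d * 65536.0)}; }
    double toDouble() const             { return raw / 65536.0; }

    // Same exponent on both sides: plain integer add/subtract works.
    Fixed16 operator+(Fixed16 o) const { return {raw + o.raw}; }
    Fixed16 operator-(Fixed16 o) const { return {raw - o.raw}; }

    // Multiply and divide need the extra shift to restore the radix point.
    Fixed16 operator*(Fixed16 o) const {
        return {(int32_t)(((int64_t)raw * o.raw) >> 16)};
    }
    Fixed16 operator/(Fixed16 o) const {
        return {(int32_t)(((int64_t)raw << 16) / o.raw)};
    }
};
```

For example, `Fixed16::fromDouble(1.5) * Fixed16::fromInt(4)` yields a value whose `toDouble()` is 6.0. A production version would also handle rounding, overflow and saturation, which this sketch ignores.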

jdv
Actually, integer math isn't fastest. Spreading the work across integer ALU, FPU, and SIMD unit is fastest, but obviously much more complex.
Ben Voigt
@Ben: True, but this is beyond the scope of this question.
jdv
+2  A: 

Fixed point is widely used in DSP and embedded systems, where the target processor often has no FPU and fixed point can be implemented reasonably efficiently using an integer ALU.

In terms of performance, that is likely to vary depending on the target architecture and application. Obviously, if there is no FPU then fixed point will be considerably faster. When you have an FPU it will depend on the application too. For example, performing functions such as sqrt() or log() will be much faster when directly supported in the instruction set rather than implemented algorithmically.

There is no built-in fixed-point type in C or C++, I imagine, because they (or at least C) were envisaged as systems-level languages and the need for fixed point is somewhat domain-specific, and also perhaps because on a general-purpose processor there is typically no direct hardware support for fixed point.

In C++, defining a fixed-point data type class with suitable operator overloads and associated math functions can easily overcome this shortcoming. However, there are good and bad solutions to this problem. A good example can be found here: http://www.drdobbs.com/cpp/207000448. The link to the code in that article is broken, but I tracked it down to ftp://66.77.27.238/sourcecode/ddj/2008/0804.zip

Clifford
+1  A: 

You need to be careful when discussing "precision" in this context.

For the same number of bits of representation, the maximum fixed-point value has more significant bits than any floating-point value (because the floating-point format has to give some bits away to the exponent), but the smallest positive fixed-point value has fewer than any non-denormalized floating-point value (because the fixed-point value wastes most of its bits in leading zeros).

Also depending on the way you divide the fixed point number up, the floating point value may be able to represent smaller numbers meaning that it has a more precise representation of "tiny but non-zero".

And so on.

dmckee
+2  A: 

At the code level, fixed-point arithmetic is simply integer arithmetic with an implied denominator.

For many simple arithmetic operations, fixed-point and integer operations are essentially the same. However, there are some operations for which the intermediate values must be represented with a higher number of bits and then rounded off. For example, to multiply two 16-bit fixed-point numbers, the result must be temporarily stored in 32 bits before renormalizing (or saturating) back to 16-bit fixed point.

When the software does not take advantage of vectorization (such as CPU-based SIMD or GPGPU), integer and fixed-point arithmetic are faster than the FPU. When vectorization is used, the efficiency of the vectorization matters a lot more, to the point that the performance difference between fixed point and floating point is moot.

Some architectures provide hardware implementations for certain math functions, such as sin, cos, atan and sqrt, for floating-point types only. Some architectures do not provide any hardware implementation at all. In both cases, specialized math software libraries may provide those functions using only integer or fixed-point arithmetic. Often, such libraries will provide multiple levels of precision, for example answers which are only accurate up to N bits of precision, which is less than the full precision of the representation. The limited-precision versions may be faster than the highest-precision versions.

rwong