views:

296

answers:

6

Can you help me clarify the usages of the float primitive in Java?

My understanding is that converting a float value to double and vice-versa can be problematic. I read (rather a long time ago, and I'm not sure it's still true on modern JVMs) that float's performance is much worse than double's. And of course floats have less precision than doubles.

I also remember that when I worked with AWT and Swing I had some problems choosing between float and double (e.g. Point2D.Float vs. Point2D.Double).

So, I see only 2 advantages of float over double:

  1. It needs only 4 bytes while double needs 8 bytes

  2. The Java Memory Model (JMM) guarantees that assignment is atomic for float variables, while it is not for doubles.

Are there any other cases where float is better than double? Do you use float in your applications?
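For reference, the size difference in point 1 can be confirmed directly from the standard constants (a minimal sketch; Float.BYTES and Double.BYTES require Java 8+):

```java
public class Sizes {
    public static void main(String[] args) {
        // Size in bytes of each primitive, as defined by the JLS
        System.out.println(Float.BYTES);   // 4
        System.out.println(Double.BYTES);  // 8
    }
}
```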

+1  A: 

Those two reasons you just gave are huge.

If you have a 3D volume that's 1k by 1k by 64, and then have many timepoints of that data, and then want to make a movie of maximum intensity projections, the fact that float is half the size of double could be the difference between finishing quickly and thrashing because you ran out of memory.
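To put rough numbers on that volume example (a back-of-the-envelope sketch; the 1k × 1k × 64 dimensions are taken from the answer above, the timepoint count is invented for illustration):

```java
public class VolumeFootprint {
    public static void main(String[] args) {
        long voxels = 1024L * 1024 * 64;           // one 3-D volume
        long asFloat  = voxels * Float.BYTES;      // 268_435_456 bytes = 256 MiB
        long asDouble = voxels * Double.BYTES;     // 536_870_912 bytes = 512 MiB
        System.out.println(asFloat + " vs " + asDouble);
        // With, say, 100 timepoints: 25 GiB vs 50 GiB of raw data --
        // potentially the difference between fitting in RAM and thrashing.
    }
}
```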

Atomicity is also huge, from a threading standpoint.

There's always going to be a tradeoff between speed/performance and accuracy. If you have an integer value smaller than 2^31, then an int is always a better representation of that number than a float, simply because of the precision loss. You'll have to evaluate your needs and use the appropriate types for your problems.
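A concrete illustration of that precision loss: 2^24 + 1 is the first int that a float cannot represent exactly, because float has only a 24-bit significand.

```java
public class IntVsFloat {
    public static void main(String[] args) {
        int n = 16_777_217;          // 2^24 + 1
        float f = n;                 // rounds to 16_777_216.0f
        System.out.println(n == (int) f);   // false -- precision lost
        double d = n;                // double's 53-bit significand holds it exactly
        System.out.println(n == (int) d);   // true
    }
}
```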

mmr
A: 

So yes, advantages of floats:

  1. Only requires 4 bytes
  2. Atomic assignment
  3. Arithmetic should be faster, especially on 32-bit architectures, since there are specific float bytecodes.

Ways to mitigate these when using doubles:

  1. Buy more RAM; it's really cheap.
  2. Use volatile doubles if you need atomic assignment.
  3. Run benchmarks and verify the performance of each; if one really is faster, there isn't a lot you can do about it.
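A sketch of point 2 (the SensorReading class name is invented for illustration). The JLS §17.7 permits non-volatile long and double writes to be split into two 32-bit halves, but guarantees atomicity once the field is volatile:

```java
// Hypothetical holder: a plain double field could in principle be
// written in two 32-bit halves on some JVMs; declaring it volatile
// makes every read and write of it atomic.
class SensorReading {
    private volatile double value;

    void set(double v) { value = v; }
    double get() { return value; }
}
```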

Someone else mentioned that this is similar to the short vs. int argument, but it is not. All integer types (including boolean) except long are stored as 4-byte integers in the Java memory model unless stored in an array.

Geoff
As fields, integral types all take 4 bytes, but not when stored in arrays. E.g. a short[] takes 2 bytes per element.
mdma
Yes, quite correct. (Except long, which is actually stored in two slots).
Geoff
+1  A: 

Look here for details: http://people.uncw.edu/tompkinsj/133/numbers/Reals.htm I would say the other important difference is the range of real numbers they can represent.

Gabriel Ščerbák
A: 

I think you nailed it when you mention storage, with floats being half the size.

Using floats may show improved performance over doubles for applications that process large arrays of floating-point numbers, where memory bandwidth is the limiting factor. By switching from double[] to float[] and halving the data size, you effectively double the throughput, because twice as many values can be fetched in a given time. Although the CPU has a little more work to do converting each float to a double, this happens in parallel with the memory fetch, and the fetch takes longer.
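A rough way to try this yourself (a naive timing sketch with invented class and method names, not a proper JMH benchmark; JIT warm-up will add noise, so treat the numbers as indicative only):

```java
public class SumBench {
    // Sum with a double accumulator in both cases, so only the
    // array element type -- and hence bytes moved -- differs.
    static double sumF(float[] a)  { double s = 0; for (float v : a)  s += v; return s; }
    static double sumD(double[] a) { double s = 0; for (double v : a) s += v; return s; }

    public static void main(String[] args) {
        int n = 1 << 24;                       // 16M elements
        float[] f = new float[n];
        double[] d = new double[n];
        java.util.Arrays.fill(f, 1.0f);
        java.util.Arrays.fill(d, 1.0);
        long t0 = System.nanoTime();
        double sf = sumF(f);                   // reads 64 MiB
        long t1 = System.nanoTime();
        double sd = sumD(d);                   // reads 128 MiB
        long t2 = System.nanoTime();
        System.out.printf("float[]: %.1f ms, double[]: %.1f ms (sums %.0f / %.0f)%n",
                (t1 - t0) / 1e6, (t2 - t1) / 1e6, sf, sd);
    }
}
```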

For some applications the loss of precision might be worth trading for the gain in performance. Then again... :-)

mdma
+2  A: 

The reason for including the float type is to some extent historic: it represents a standard IEEE floating point representation from the days when shaving 4 bytes off the size of a floating point number in return for extremely poor precision was a tradeoff worth making.

Nowadays, uses for float are pretty limited. But, for example, having the data type can make it easier to write code that needs interoperability with older systems that do use float.

As far as performance is concerned, I think float and double are essentially identical except for division. Generally, whichever you use, the processor converts to its internal format, does the calculation, and converts back, and the actual calculation effectively takes a fixed time. In the case of division, on Intel processors at least, as I recall the time taken is roughly one clock cycle per 2 bits of precision, so whether you use float or double does make a difference.

Unless you really really have a strong reason for using it, in new code, I would generally avoid 'float'.

Neil Coffey
It's worth noting that on Intel CPUs, the internal representation is 80 bits, which is different from *both* 32-bit `float` and 64-bit `double`. So a conversion happens in either case.
Greg Hewgill
Yes, absolutely-- sorry I thought that was clear when I said "whichever you use [...] converts to its internal format". But yes, completely agree.
Neil Coffey
And the existence of MP3 is to some extent historic: it comes from the days when shaving 50-90% off the size of your audio files in return for reduced precision was a tradeoff worth making. **Seriously!!** Stop thinking 4 bytes and start thinking 50%. If you have thousands of hours of floating-point audio, 32 bits is plenty of precision, and doubling the storage size for extra precision would be useless. Think arrays, not single variables!
R..
@r You may have a point for some applications, but just doubling your memory/storage capacity is often quite viable. After all, you're only talking about halving the data size, not making it 10%...
Neil Coffey
A: 

It is true that individual operations on doubles might in some cases be faster than on floats. However, that assumes everything fits in the L1 cache. With floats you can fit twice as many values in a cache line, which can make some programs run almost twice as fast.

SSE instructions can also work with 4 floats in parallel instead of 2, but I doubt that the JIT actually uses those. I might be wrong though.

Jørgen Fogh