views: 116
answers: 4

I'm using Ogre and NxOgre, both of which have a Real typedef that is either float or double depending on a compiler flag. This has resulted in most of our compiler warnings now being:

warning C4305: 'argument' : truncation from 'double' to 'Ogre::Real'

This happens when initialising variables with a literal such as 0.1. Normally I would use 0.1f, but then if you change the compiler flag to double precision you get the reverse warning. It's probably best to pick one and stick with it, but I'd like to write these initialisations in a way that works for either configuration if possible.

One fix would be to use #pragma warning (disable : 4305) in the files where it occurs, though I don't know whether there are more subtle problems that could be hidden by suppressing this warning. I understand I would also need to push and pop the warning state in header files so that the suppression doesn't spread across the codebase.
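For reference, a minimal sketch of that push/disable/pop pattern in a header, using the standard MSVC pragmas (the constant is hypothetical, just to show where the suppressed code would sit):

#pragma warning (push)
#pragma warning (disable : 4305)    // 'initializing' : truncation from 'double' to 'Ogre::Real'

const Ogre::Real kDamping = 0.1;    // would otherwise warn when Ogre::Real is float

#pragma warning (pop)               // restore the previous warning state for including files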

Another option is to create a macro based on the precision compiler flag, like:

#if OGRE_DOUBLE_PRECISION
    #define INIT_REAL(x) (x)                      // keep the double literal as-is
#else
    #define INIT_REAL(x) static_cast<float>( x )  // truncate explicitly so no warning is emitted
#endif

This would require changing all the variable initialisation done so far, but at least it would be future-proof.
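For illustration, an initialisation would then look something like this (the variable name is hypothetical):

Ogre::Real damping = INIT_REAL(0.1);   // compiles cleanly whether Real is float or double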

Any preferences or something I haven't thought of?

+1  A: 

Clearly, if the type you use is platform-specific, so should the literals be. The same issues appear when using TCHARs, CStrings, LPCTSTRs, etc., only worse: char cannot be converted to wchar_t.

So my preference would definitely go to the initialization macro. Silencing warnings is dangerous.

You can even define your macro so that it appends the float literal suffix:

//#define DOUBLEPREC

#ifndef DOUBLEPREC
typedef float REAL;
#define INIT_REAL(r) (r##f)   // paste the 'f' suffix onto the literal
#else
typedef double REAL;
#define INIT_REAL(r) (r)      // leave the literal as a plain double
#endif

REAL r = INIT_REAL(.1);
double d = INIT_REAL(.1);
float f = INIT_REAL(.1);
xtofl
@xtofl - great tip with ##f, that could be exactly what's needed; I'm not really well versed in the extra things you can do in macros with #. Plus, this way it won't compile if you put something other than a literal into the macro, rather than silently casting something to float that you didn't mean to.
identitycrisisuk
Anyone want to share why this was downvoted? I know macros are generally evil but this seemed like an ok usage (although it would break if you didn't have a decimal point in your number...)
identitycrisisuk
Actually, using a `typedef` instead of a 'REAL' macro would be better. I admit I overlooked that.
xtofl
A: 

Personally, I would stick to one of the types and forget about this warning. In any case, double vs. single precision has minimal impact on fps (in my projects it was < 3%).

Also worth mentioning: as far as I know, DirectX and OpenGL work in single precision.

If you really want to get rid of this warning the proper way, you could use the #if approach, but not quite the way you did. A better approach is something like:

#if OGRE_DOUBLE_PRECISION == 1
    typedef double Real;
#else
    typedef float Real;
#endif

You could also write your code using Ogre::Real as the type (or typedef it to something more comfortable for you).

Kotti
@Kotti - That exact definition already exists in Ogre; it's more a question of how you set Ogre::Real variables once you have them: with 0.1 or 0.1f.
identitycrisisuk
+2  A: 

The simple solution would be to just add a cast:

static_cast<Ogre::Real>(0.1);

or you could write a function to do the conversion for you (similar to your macro, but avoiding all the yucky problems macros bring):

template <typename T>
inline Ogre::Real real(T val) { return static_cast<Ogre::Real>(val); } 

Then you can just call real(0.1) and get the value as an Ogre::Real.
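For instance, a one-line usage sketch (the variable name is illustrative):

Ogre::Real damping = real(0.1);   // no C4305, and the literal keeps full accuracy in double-precision builds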

jalf
Picked this as the accepted answer since it would keep double accuracy. I will probably define most literals as floats though if I am sure they don't need double accuracy.
identitycrisisuk
+1  A: 

Always initialise using float literals. Every float value has a double value that represents exactly the same number (though that double may not be the best double-precision approximation to the decimal value the float was intended to represent).

Visual Studio seems to perform the float-to-double conversion at compile time, even in debug builds, though the results aren't exactly inspiring: passing 1.1f into a function that takes a double passes 1.1000000238418579, but passing 1.1 passes 1.1000000000000001. (Both values according to the watch window.)

(No such problems with numbers that have an exact representation, of course, e.g. 1.25f.)
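A small, self-contained illustration of the same behaviour (a sketch; the printed digits match the watch-window values quoted above):

#include <cstdio>

void takesDouble(double d)
{
    std::printf("%.17g\n", d);
}

int main()
{
    takesDouble(1.1f);  // float literal widened to double: 1.1000000238418579
    takesDouble(1.1);   // double literal passed directly:  1.1000000000000001
    takesDouble(1.25f); // exactly representable, so no difference: 1.25
}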

Whether this is a big deal or not, I couldn't say, but if your game runs OK with floats then it is already tolerating a similar level of inaccuracy.

brone
@brone - I investigated the values of numbers assigned to doubles as well and came to a similar conclusion. I think float will be fine for graphical stuff, but I've got a feeling that for physics we might need double when moving very small distances at a high frame rate. Whether that will make any difference to the initial assignments I don't know; I can always use the template function idea for numbers I think may need more accuracy.
identitycrisisuk
Most values will be the results of computations, and in double mode they will therefore have had the benefit of the extra precision during the calculations. In any event, if you NEED doubles, you'll just use the double type rather than Ogre::Real, and the whole issue becomes irrelevant...
brone