Fundamentally, trying to store a 64-bit value in a 32-bit one will result in a loss of information, and the compiler is right to warn you about it. What you really have to ask is "do I really need all that precision?"
If you don't need the precision, then "float output = Convert.ToSingle(inputAsDouble);" will do the trick - it simply rounds to the nearest representable single-precision value.
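To see the rounding in action, here's a quick sketch (the class name and sample value are just for illustration):

```csharp
using System;

class Demo
{
    static void Main()
    {
        double d = 1.0 / 3.0;            // 53-bit mantissa: 0.333333333333333...
        float  f = Convert.ToSingle(d);  // rounded to the nearest 24-bit mantissa value

        // The conversion is exact to roughly 7 significant digits;
        // everything beyond that is rounded away.
        Console.WriteLine(d);
        Console.WriteLine(f);
        Console.WriteLine(Math.Abs((double)f - d)); // on the order of 1e-8
    }
}
```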
If you do need the precision but still need the value to fit in 32 bits, then you have to constrain the range somehow. For example, if you know that your value is always going to be in the range -1000 to 1000, then you can use fixed-point arithmetic to convert between the stored 32-bit value and the double value you need to use for calculations. The scheme won't be able to represent every double in your range, but you will have 32-bit granularity inside that limited range, which is really quite a decent accuracy.
For example, you can make a helper struct like:
using System;
using System.Diagnostics;

struct FixedDouble
{
    // Values in [-RANGE, RANGE] are mapped linearly onto [-int.MaxValue, int.MaxValue].
    const double RANGE = 1000.0;

    int storage;

    public FixedDouble(double input)
    {
        storage = DoubleToStorage(input);
    }

    public double AsDouble
    {
        get { return StorageToDouble(storage); }
    }

    public static int DoubleToStorage(double input)
    {
        Debug.Assert(input <= RANGE);
        Debug.Assert(input >= -RANGE);
        double rescaledValue = (input / RANGE) * int.MaxValue;
        // Round to nearest rather than truncate, which halves the worst-case error.
        return (int)Math.Round(rescaledValue);
    }

    public static double StorageToDouble(int input)
    {
        return ((double)input / (double)int.MaxValue) * RANGE;
    }
}
This code probably won't work as-is because I've just knocked it out quickly, but the idea is there: you sacrifice the full range that the double offers you, choose instead a fixed granularity between two endpoints, and let the 32-bit value pick out a point on the number line between them.
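As a sanity check, the round trip can be sketched without the struct at all. This is my own illustration, assuming a range of ±1000; the names `Encode`/`Decode` are mine, not from anything above:

```csharp
using System;

class FixedPointDemo
{
    const double Range = 1000.0; // illustrative; pick whatever bound fits your data

    static int Encode(double value)
    {
        // Map [-Range, Range] linearly onto [-int.MaxValue, int.MaxValue].
        return (int)Math.Round(value / Range * int.MaxValue);
    }

    static double Decode(int stored)
    {
        return (double)stored / int.MaxValue * Range;
    }

    static void Main()
    {
        double original = 123.456789;
        double roundTripped = Decode(Encode(original));

        // Worst-case error is half a step: Range / int.MaxValue / 2, about 2.3e-7 here.
        Console.WriteLine(Math.Abs(roundTripped - original));
    }
}
```

The error you see on the console should be no bigger than half the granularity step, which is the whole point of spending all 32 bits on a limited range.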