float f = 5.13f;
double d = 5.13;
float fp = f - (float)Math.Floor(f);
double dp = d - Math.Floor(d);
Isn't there any faster way than calling an external function every time?
"External function"?
System.Math is built into mscorlib!
This is actually the fastest way to do this.
It is static, so there is no object to stand up; the call should already be very fast. You could always drop to bit-level math, but unless you have a serious need, just use the function. Math.Floor is likely doing something similar internally already; you could inline it and cut out the argument checks if you need something really fast, but in C# this is unlikely to be your biggest performance issue.
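For reference, the floor-based and cast-based approaches to extracting the fractional part can be sketched like this (shown in Java here as a sketch; the C# version is essentially identical apart from the casing of Math.Floor). The key difference between the two only appears for negative values:

```java
public class FracPart {
    public static void main(String[] args) {
        double d = 5.13;
        // Floor-based: always yields a non-negative fractional part.
        double viaFloor = d - Math.floor(d);
        // Cast-based: truncates toward zero, so it matches floor only for non-negative values.
        double viaCast = d - (long) d;
        System.out.println(viaFloor == viaCast); // true for positive d

        double n = -5.13;
        System.out.println(n - Math.floor(n));   // ~0.87  (floor goes to -6)
        System.out.println(n - (long) n);        // ~-0.13 (cast truncates to -5)
    }
}
```

So the cast is only a drop-in replacement if your values are known to be non-negative.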
Well, I doubt you'll get any real-world performance gain, but according to Reflector the decimal overload of Math.Floor is this:
public static decimal Floor(decimal d)
{
return decimal.Floor(d);
}
So arguably, for decimal values,
decimal dp = d - decimal.Floor(d);
may be quicker. (Compiler optimisations make the whole point moot, I know...)
For those who may be interested in taking this to its logical conclusion, decimal.Floor is:
public static decimal Floor(decimal d)
{
decimal result = 0M;
FCallFloor(ref result, d);
return result;
}
with FCallFloor being a call into unmanaged code, so you are pretty much at the limit of the "optimisation" there.
You could cast the value to an int, which would trim the fractional part. This presumes that your floats and doubles fall within the range of an int.
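The range caveat matters: when the value doesn't fit in the integer type, the cast no longer gives you the whole part. A small Java illustration (Java saturates the narrowing conversion at Integer.MAX_VALUE; C#'s behaviour in an unchecked context differs, so treat this as a sketch of the pitfall rather than a portable result):

```java
public class CastRange {
    public static void main(String[] args) {
        double big = 1e18;                    // far outside int range
        System.out.println((int) big);        // 2147483647 in Java: the cast saturates
        System.out.println((long) big);       // 1000000000000000000: long still fits
        System.out.println(big - (long) big); // 0.0: fractional part recovered correctly
    }
}
```

Using the widest available integer type (long here) pushes the limit out considerably, but a sufficiently large double will still overflow it.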
Of course, the jitter may be smart enough to optimize Math.floor to some inlined asm that'll do the floor, which may be faster than the cast to int then cast back to float.
Have you actually measured and verified that the performance of Math.floor is affecting your program? If you haven't, you shouldn't bother with this level of micro-optimization until you know that is a problem, and then measure the performance of this alternative against the original code.
EDIT: This does appear faster. The following code takes 717 ms when using Math.Floor() and 172 ms for the int-casting code on my machine, in release mode. But again, I doubt the perf improvement really matters - to get this to be measurable I had to do 100 million iterations. Also, I find Math.Floor() much more readable and obvious in intent, and a future CLR could emit more optimal code for Math.Floor and beat this approach easily.
private static double Floor1Test()
{
// Keep track of results in total so ops aren't optimized away.
double total = 0;
for (int i = 0; i < 100000000; i++)
{
float f = 5.13f;
double d = 5.13;
float fp = f - (float)Math.Floor(f);
double dp = d - Math.Floor(d);
total += fp + dp;
}
return total;
}
private static double Floor2Test()
{
// Keep track of total so ops aren't optimized away.
double total = 0;
for (int i = 0; i < 100000000; i++)
{
float f = 5.13f;
double d = 5.13;
float fp = f - (int)(f);
double dp = d - (int)(d);
total += fp + dp;
}
return total;
}
static void Main(string[] args)
{
System.Diagnostics.Stopwatch timer = new System.Diagnostics.Stopwatch();
// Unused warm-up run first, to guarantee the code is JIT'd.
timer.Start();
Floor1Test();
Floor2Test();
timer.Stop();
timer.Reset();
timer.Start();
Floor1Test();
timer.Stop();
long floor1time = timer.ElapsedMilliseconds;
timer.Reset();
timer.Start();
Floor2Test();
timer.Stop();
long floor2time = timer.ElapsedMilliseconds;
Console.WriteLine("Floor 1 - {0} ms", floor1time);
Console.WriteLine("Floor 2 - {0} ms", floor2time);
}
Donald E. Knuth said:
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
So unless you have benchmarked your application and found positive evidence that this operation is the bottleneck, don't bother optimizing this line of code.
In the case of Decimal, I would recommend ignoring everyone yelling not to change it and trying Decimal.Truncate. Whether it is faster or not, it is a function specifically intended for what you are trying to do and thus is a bit clearer.
Oh, and by the way, it is faster:
System.Diagnostics.Stopwatch foo = new System.Diagnostics.Stopwatch();
Decimal x = 1.5M;
Decimal y = 1;
int tests = 1000000;
foo.Start();
for (int z = 0; z < tests; ++z)
{
y = x - Decimal.Truncate(x);
}
foo.Stop();
Console.WriteLine(foo.ElapsedMilliseconds);
foo.Reset();
foo.Start();
for (int z = 0; z < tests; ++z)
{
y = x - Math.Floor(x);
}
foo.Stop();
Console.WriteLine(foo.ElapsedMilliseconds);
Console.ReadKey();
//Output: 123
//Output: 164
Edit: Fixed my explanation and code.
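For comparison, here is a sketch of the same truncate-then-subtract idea using Java's BigDecimal, the closest analogue to .NET's decimal; setScale with RoundingMode.DOWN plays the role of Decimal.Truncate (an assumed mapping, not an exact equivalent):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalFrac {
    public static void main(String[] args) {
        BigDecimal x = new BigDecimal("1.5");
        // Truncate toward zero, then subtract to get the fractional part.
        BigDecimal whole = x.setScale(0, RoundingMode.DOWN);
        BigDecimal frac = x.subtract(whole);
        System.out.println(frac); // 0.5
    }
}
```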