views: 220 · answers: 2

I am having some speed issues with my C# program and have identified that this percentage calculation is causing the slowdown. The calculation is simply n / d * 100. Both the numerator and the denominator can be any integer. The numerator can never be greater than the denominator and is never negative, so the result is always in the range 0-100. Right now this is done with floating point math, and it's somewhat slow since it's calculated tens of millions of times. I don't need anything more accurate than the nearest 0.1 percent, and I only use the calculated value to check whether it's bigger than a fixed constant. I'm thinking everything should be kept as integers, so with 0.1 accuracy the range would be 0-1000. Is there some way to calculate this percentage without floating point math?

Here is the loop that I am using with calculation:

for (int i = 0; i < simulationList.Count; i++)
{
    for (int j = i + 1; j < simulationList.Count; j++)
    {
        int matches = GetMatchCount(simulationList[i], simulationList[j]);
        if ((float)matches / (float)simulationList[j].Catchments.Count > thresPercent)
        {
            simulationList[j].IsOverThreshold = true;
        }
    }
}
+6  A: 

Instead of n / d > c, you can use n > d * c (assuming d > 0; here c is the constant value you are comparing against).

This way you don't need division at all.

However, watch out for the overflows.

Vlad
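Applied to the loop in the question, the division-free comparison might look like this (a sketch, not from the original answer; `thresPermille` is an assumed integer threshold in tenths of a percent, standing in for the original `thresPercent` float):

```csharp
using System;

class ThresholdSketch
{
    // Sketch: true iff matches / (double)count > thresPermille / 1000.0,
    // for count > 0, using only integer math. Widening to long guards
    // against overflow when count and the threshold are large.
    static bool OverThreshold(int matches, int count, int thresPermille)
    {
        return 1000L * matches > (long)count * thresPermille;
    }

    static void Main()
    {
        Console.WriteLine(OverThreshold(38, 100, 375)); // True  (38.0% > 37.5%)
        Console.WriteLine(OverThreshold(37, 100, 375)); // False (37.0% < 37.5%)
    }
}
```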
This is good, since it even gets rid of division completely, which is usually more expensive than multiplication.
Matti Virkkunen
Nowadays, division and multiplication cost the same on modern CPUs.
Loadmaster
@Loadmaster: isn't integer multiplication cheaper than conversion from ints to floats and floating point division?
Vlad
@Loadmaster: Seems I'm a bit out of date then, since I always thought integer multiplication was less expensive than integer division. @Vlad: It is; however, you can do it with integer division too, by computing (1000 * n) / d
Matti Virkkunen
@Matti: optimizing compilers replace integer division by a constant with integer multiplication and a bit shift, hence no difference in cost. I'm pretty sure division by a variable is still several times slower than multiplication.
Ben Voigt
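For illustration, here is the multiply-and-shift form described above, using the well-known magic-number pair for unsigned 32-bit division by 10 (constant 0xCCCCCCCD, shift 35, per Hacker's Delight); this sketches the kind of code an optimizing compiler emits for x / 10:

```csharp
using System;

class DivBy10Sketch
{
    // x / 10 computed as a 64-bit multiply plus a shift -- no divide
    // instruction. The uint operand is promoted to ulong, so the full
    // 64-bit product is kept before shifting.
    static uint DivideBy10(uint x)
    {
        return (uint)((x * 0xCCCCCCCDUL) >> 35);
    }

    static void Main()
    {
        Console.WriteLine(DivideBy10(99));            // 9
        Console.WriteLine(DivideBy10(1000));          // 100
        Console.WriteLine(DivideBy10(uint.MaxValue)); // 429496729
    }
}
```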
@Matti, @Loadmaster: this article, at least, suggests that integer division is still considerably slower than multiplication: http://www.xzing.org/?itemid=339
Vlad
And this processor timing table suggests that integer division is slower on modern processors: http://www.agner.org/optimize/instruction_tables.pdf
Vlad
A: 

If your units are in tenths instead of ones, then you can get your 0.1 accuracy using integer arithmetic:

Instead of:

for (...)
{
    float n = ...;
    float d = ...;

    if (n / d > 1.4) // greater than 140% ?

...do something like:

for (...)
{
    int n = 10 * ...;
    int d = ...;
    if (n / d > 14) // greater than 140% ?
Scott Smith
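One subtlety worth noting: integer division truncates, so `10 * n / d > 14` is true exactly when n/d >= 1.5, not when n/d > 1.4. Cross-multiplying preserves the original boundary. A sketch of both forms (names are illustrative, not from the answer):

```csharp
using System;

class TenthsSketch
{
    // Exact equivalent of n / (double)d > 1.4, for d > 0, n >= 0:
    static bool OverExact(int n, int d) => 10L * n > 14L * d;

    // The truncating integer-division form; true iff n/d >= 1.5,
    // because floor(10n/d) > 14 means floor(10n/d) >= 15:
    static bool OverTruncating(int n, int d) => (10L * n) / d > 14;

    static void Main()
    {
        Console.WriteLine(OverExact(141, 100));      // True  (1.41 > 1.4)
        Console.WriteLine(OverTruncating(141, 100)); // False (floor(14.1) = 14)
    }
}
```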
For speed, use `double` instead of `float`.
Henk Holterman
... or even int :-)
Vlad