tags:
views: 266
answers: 4
(n >= 3) && (n <= 99)

OR

 n `elem` [3..99]

Which one is faster and why?

+12  A: 

The first one is faster:

 (n >= 3) && (n <= 99)

It does only three operations:

 n >= 3
 n <= 99
 &&

Whereas `elem` looks the item up in the array, so in the worst case it does up to (99 - 3) * 2 operations (one comparison plus one step per element). In pseudocode:

index = 0
isFound = false
array[] = { 3, 4, 5, 6, ... 98, 99 }

while not isFound and index < length(array)
   isFound = (n == array[index])
   index = index + 1
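In actual Haskell, `elem` is not array-based at all; the GHC.List source linked in the comments below defines it by plain structural recursion over the list, stopping at the first match. A sketch (named `elem'` here to avoid clashing with the Prelude):

```haskell
-- Roughly how elem is defined in GHC.List: no arrays, no magic,
-- just recursion down the list, short-circuiting on the first hit.
elem' :: Eq a => a -> [a] -> Bool
elem' _ []     = False
elem' x (y:ys) = x == y || elem' x ys
```

Because `(||)` is lazy in its second argument, the recursion stops as soon as a match is found.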
Dead account
Is this how `elem` is actually implemented? Is there really no black magic?? at all?? :)
TheMachineCharmer
Dead account
I'm guessing that you could get GHC to optimize this particular case by using a rewrite rule a la `"elem/enumFromTo" forall x lo hi. elem x (enumFromTo lo hi) = (x >= lo) && (x <= hi)`, so take this with a big pinch of salt!
yatima2975
So I conclude that there is NO black magic! ;)) +1, I like answers with code.
TheMachineCharmer
I mean `(n - start) / step` is an integer and is less than `(end - start) / step`
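The point of this comment seems to be that membership in an arithmetic sequence can be decided with O(1) arithmetic instead of a scan. A hypothetical sketch (the name `inArith` and its parameters are mine, and it assumes a positive step):

```haskell
-- Hypothetical O(1) test for n `elem` [start, start+step .. end],
-- assuming step > 0: n is a member iff it lies in the interval
-- and (n - start) is a whole number of steps.
inArith :: Int -> Int -> Int -> Int -> Bool
inArith start step end n =
  n >= start && n <= end && (n - start) `mod` step == 0
```

This agrees with the list version, e.g. `inArith 3 2 99 n == n `elem` [3,5..99]` for any `n`, but no such rule is applied automatically by GHC.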
TheMachineCharmer
Well, `elem` has no way of knowing what list it's used on: in particular, the information that it comes from an expression like `[3..99]` is gone at runtime. Think about how to compute `elem n (3 : 4 : 50000 : [6..99])` and you'll see that `elem` has to go through all the elements. As for the question in your first comment: see the source of Data.List: http://www.haskell.org/ghc/docs/latest/html/libraries/base/src/GHC-List.html#elem for the definition (it doesn't involve arrays)
yatima2975
+11  A: 

(n >= 3) && (n <= 99) is faster, as it involves only two trivial comparisons. Unless the compiler/interpreter does some kind of real black-magic optimization, it has to construct the list ([3..99]): lazy evaluation means `elem` only "pulls" the next value until it finds a match or exhausts the list, but that still costs O(n) comparisons on average (about n/2 in this case).
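To illustrate the short-circuiting that laziness does give you: `elem` stops at the first match, so a hit near the front is cheap even on an infinite list, while a miss forces the entire enumeration. A small demonstration:

```haskell
main :: IO ()
main = do
  -- True: the match is found at the head, so the rest of the
  -- infinite list is never built.
  print (3 `elem` [3 ..])
  -- False: every element of [3..99] is generated and compared.
  print (100 `elem` [3 .. 99 :: Int])
```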

Florian Thiel
+1 Welcome to stackoverflow :)
Dead account
+1 So there is no black magic though I was expecting some :) also welcome to SO!
TheMachineCharmer
+7  A: 

These two expressions don't mean the same thing. A subtle difference is that one relies on Ord and the other on Enum:

> :t \n -> (n >= 3) && (n <= 99)
\n -> (n >= 3) && (n <= 99) :: (Num a, Ord a) => a -> Bool

> :t \n -> n `elem` [3..99]
\n -> n `elem` [3..99] :: (Num a, Enum a) => a -> Bool

So, for example, if n is 3.14159, then the first test will pass, but the second won't:

> (pi >= 3) && (pi <= 99)
True

> pi `elem` [3..99]
False

Further, while the four Prelude Num instances (Int, Integer, Float, and Double) are all instances of both Ord and Enum, it is possible to imagine a numeric type that is an instance of Ord but not Enum. In such a case, then the second test wouldn't even be legal.
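As a concrete (made-up) example of such a type, consider a newtype that derives Ord but not Enum; the comparison version typechecks, the `elem` version cannot:

```haskell
-- Hypothetical type: Ord, but no Enum instance.
newtype Price = Price Rational deriving (Eq, Ord)

-- Fine: only Ord is needed.
inBand :: Price -> Bool
inBand p = p >= Price 3 && p <= Price 99

-- Would be a type error: [Price 3 .. Price 99] needs Enum Price.
-- inBand' p = p `elem` [Price 3 .. Price 99]
```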

Hence, in general, the compiler can't optimize the second to be as fast as the first unless it knows, for a given type, that it is Ord and that every ordered value in the range is also in the list enumeration created by enumFromTo. For Float and Double this isn't true, and for Int and Integer there is no way for the compiler to derive it; the compiler and library programmers would have to hand-code it and ensure it held in all cases.

MtnViewMark
A: 

It depends on the machine and the compiler (or interpreter).

trinithis