views: 122
answers: 2

I have the following code:

{-# NOINLINE i2i #-}
i2i :: Int -> Integer
i2i x = toInteger x

main = print $ i2i 2

Running GHC with the -ddump-simpl flag gives:

[Arity 1
 NoCafRefs
 Str: DmdType U(L)]
Main.i2i = GHC.Real.toInteger1

It seems that the conversion from Int to Integer is lazy. Why is that? Is there a case where I can have

(toInteger (_|_ :: Int)) /= _|_

?
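For what it's worth, a quick runnable check (a sketch; the string in the Left branch is just a label I made up) suggests the answer is no, the conversion is semantically strict:

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  -- Force toInteger applied to a diverging Int and catch the resulting exception.
  r <- try (evaluate (toInteger (undefined :: Int))) :: IO (Either SomeException Integer)
  case r of
    Left _  -> putStrLn "diverges: toInteger _|_ = _|_"
    Right n -> print n
```

Forcing the result to weak head normal form forces the Int argument, so the Left branch is taken.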

Edit: the question has more to do with GHC's strictness analyzer than with laziness per se. This code was derived from exploring the standard mean function:

--mean :: Integer -> Integer -> [Integer] -> Double
mean :: Integer -> Int -> [Integer] -> Double
mean acc n [] = fromIntegral acc / fromIntegral n
mean acc n (x:xs) = mean (acc + x) (n + 1) xs

main = print $ mean 0 0 [1..1000000]

This code runs on O(N) space. When I uncomment first line, space consumption changes to O(1). Seems that it comes down to fromIntegral call, which in turn comes down to toInteger. Strictness analyzer somehow cannot infer that conversion is strict, which seems strange to me.

+1  A: 

I think you're looking at this the wrong way. Consider the following, silly fragment of code

let x = [undefined]
let y = map toInteger x

If we evaluate

 y == []

we get False, whereas if we evaluate

 head y

we get an undefined exception. There's no reason that applying map or comparing y with [] should diverge just because the only element of x is undefined. That's the essence of non-strictness.
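The two fragments above can be put together as one runnable sketch (the type annotation on x and the exception handling are additions of mine, to make the divergence observable rather than crashing the program):

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  let x = [undefined] :: [Int]
      y = map toInteger x
  -- Comparing with [] only forces the outermost list constructor, not the element.
  print (y == [])
  -- Forcing the element itself diverges.
  r <- try (evaluate (head y)) :: IO (Either SomeException Integer)
  case r of
    Left _  -> putStrLn "head y: undefined"
    Right n -> print n
```

The first print succeeds with False because (==) against [] only needs to see that y is a cons cell; only the second action touches the undefined element.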

Chris Conway
Please see the edit - I understand strictness in your example, but my question is not about strictness per se; it's about how GHC infers strictness for optimization.
+8  A: 

Response to your edit: the dangers of O(N) space leaks for accumulating parameters are well known, at least to Haskell programmers. What ought to be well known but isn't is that no matter what the language, you should never trust to the optimizer to provide asymptotic guarantees for the space and time behavior of your programs. I don't understand the implications of simple optimizers I've written myself, let alone something hairy like GHC's front end, what with a strictness analyzer, inliner, and all the rest.

As to your two questions,

Why doesn't GHC's strictness analyzer optimize this particular code, when it does optimize very similar code?

Who knows?? (Maybe Simon PJ knows, maybe not.) If you care about performance, you shouldn't be relying on the strictness analyzer. Which brings us to the second, implied question:

How can I avoid O(N) space costs on this function and on every other function that uses accumulating parameters?

By putting strictness annotations on the accumulating parameters that force them to be evaluated at each tail-recursive call.
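For example, the mean function from the question can be annotated with bang patterns (a sketch using GHC's BangPatterns extension; `seq` on the arguments before the recursive call works equally well):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Bang patterns force both accumulators to WHNF on every call,
-- so no chain of (acc + x) and (n + 1) thunks can build up.
mean :: Integer -> Int -> [Integer] -> Double
mean !acc !n []     = fromIntegral acc / fromIntegral n
mean !acc !n (x:xs) = mean (acc + x) (n + 1) xs

main :: IO ()
main = print $ mean 0 0 [1..1000000]
```

This version runs in constant space regardless of what the strictness analyzer decides, which is the point: the annotation makes the asymptotic behavior part of the program rather than a property of the optimizer.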

Norman Ramsey