tags:
views: 116
answers: 1

Hello

What is wrong here? Is it lazy evaluation again?

teste.hs

module Main where  

import Control.Parallel(par,pseq)  
import Text.Printf  
import Control.Exception  
import System.CPUTime  
import Data.List  
import System.IO  
import Data.Char  
import Control.DeepSeq  

-- Measures the CPU time elapsed between the start and end of a computation  
time :: IO t -> IO t  
time a = do  
    start <- getCPUTime  
    v <- a
    end   <- getCPUTime  
    let diff = (fromIntegral (end - start)) / (10^12)  
    printf "Computation time: %0.3f sec\n" (diff :: Double)  
    return v  

learquivo :: FilePath -> IO [[Int]]  
learquivo s = do   
            conteudo <- readFile s   
            return (read conteudo)   

main :: IO ()  
main = do   
    conteudo <- learquivo "mkList1.txt"   
    mapasort <- return (map sort conteudo)
    time $ mapasort  `seq` return ()  

*Main> main
Computation time: 0.125 sec

mkList1.txt contains a list of 100 lists, each with 100 random numbers, more or less like this: [[23,45,89,78,89 ...], [4783, 44, 34 ...]...]

I did a test printing mapasort:

  • time $ print ("Sort usando map = ", mapasort)

And the computation time increased considerably, so I think something is wrong.

Computation time: 1.188 sec

Thanks

+4  A: 

Yes, this is due to Haskell's laziness. You're trying to get around the laziness by using seq, but since seq is "shallow" (i.e. it doesn't traverse the whole structure of the expression, only the outermost layer), it will force the evaluation of the map's outermost constructor, but not the evaluation of the sorts inside.
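The shallowness of seq can be seen with a tiny standalone sketch (hypothetical snippet, not from the question's code):

```haskell
main :: IO ()
main = do
  -- seq evaluates only to weak head normal form (the outermost
  -- (:) constructor), so the undefined element is never touched
  -- and the program prints "ok" instead of crashing
  [undefined :: Int] `seq` putStrLn "ok"
```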

To fix this, either use deepseq instead of seq or, even better, use a benchmarking library such as Criterion instead of rolling your own timer with getCPUTime.
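A minimal sketch of the deepseq fix, using a small hard-coded list in place of the file read (the name mapasort mirrors the question; the data is made up):

```haskell
import Control.DeepSeq (deepseq)
import Data.List (sort)

main :: IO ()
main = do
  let mapasort = map sort [[3,1,2],[6,5,4]]
  -- deepseq traverses the whole nested list, so every sort is
  -- actually performed before the timing measurement would stop
  mapasort `deepseq` putStrLn "fully evaluated"
```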

sepp2k
sepp2k, is there a benchmarking library that measures time the way getCPUTime does? I tried to install Criterion but couldn't get it to work.
Gmp