As Moggi proposed 20 years ago, the effectful function space -> of languages like ML can be decomposed into the standard total function space => plus a strong monad T to capture effects.

A -> B decomposes to A => (T B)

Now, Haskell supports monads, including an IO monad that appears sufficient for ML's effects, and it has a function space that contains => (but also includes partial functions). So we should be able to translate a considerable fragment of ML into Haskell via this decomposition; in theory, I think this works.
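
For instance, take an ML function bump : int ref -> int that increments a cell and returns the new contents (a hypothetical example of mine, with T instantiated to IO); under the decomposition its effectful arrow becomes a pure Haskell arrow into IO:

import Data.IORef

-- ML:      bump : int ref -> int            (effectful arrow ->)
-- Haskell: bump :: IORef Int -> IO Int      (pure arrow into T B, T = IO)
bump :: IORef Int -> IO Int
bump r = do
  n <- readIORef r
  writeIORef r (n + 1)
  return (n + 1)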

My question is whether an embedding like this can be practical: is it possible to design a Haskell library that allows programming in Haskell in a style not too far from ML? And if so, what would the performance be like?

My criterion for "practical" is that existing ML code that makes extensive use of effects could be transcribed into Haskell via the embedding relatively easily, including complicated cases involving higher-order functions.

To make this concrete, my own attempt at such a transcription via the embedding is below. The main function is a transcription of some simple ML code that imperatively generates 5 distinct variable names. Rather than use the decomposition directly, my version lifts functions so that they evaluate their arguments; the definitions prior to main form a mini-library of lifted primitives. This works okay, but some aspects aren't totally satisfactory:

  1. There's a little too much syntactic noise from injecting values into computations via val. Unlifted versions of functions (like rdV) would help, at the cost of requiring these to be defined (a sketch of this, together with point 4, follows the list).
  2. Non-value definitions like varNum require monadic binding via <- in a do. This then forces any definitions that depend on them to also be in the same do expression.
  3. It seems then that the whole program might end up being in one huge do expression. This is how ML programs are often considered, but in Haskell it's not quite as well supported - e.g., you're forced to use case instead of equations.
  4. I guess there will be some laziness despite threading the IO monad throughout. Since the ML program would be designed for strict evaluation, that laziness should probably be removed; I'm uncertain of the best way to do this, though the sketch below tries one option.
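
As a minimal sketch of points 1 and 4 (the helper names +. and valS are mine, purely illustrative): a half-lifted operator takes a plain value on one side, cutting down on val, and Control.Exception.evaluate gives a strict injection that forces its argument (to WHNF) at the point it enters the monad, mimicking ML's call-by-value:

import Control.Exception (evaluate)

-- Half-lifted addition: the right argument is an ordinary Int,
-- so literals need no `val` wrapper.
(+.) :: IO Int -> Int -> IO Int
m +. k = fmap (+ k) m

-- Strict injection: force the value before wrapping it in IO.
valS :: a -> IO a
valS = evaluate

With these, rdV varNum .+ val 1 in the code below could be written rdV varNum +. 1.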

So, any advice on improving this, or on better approaches using the same decomposition, or even quite different ways of achieving the same broad goal of programming in Haskell using a style that mirrors ML? (It's not that I dislike the style of Haskell, it's just that I'd like to be able to map existing ML code easily.)

import Data.IORef
import Control.Monad

-- Inject a value into a computation.
val :: Monad m => a -> m a
val = return

-- Lifted primitives: each takes computations as arguments.
ref :: IO a -> IO (IORef a)
ref = join . liftM newIORef

rdV :: IORef a -> IO a
rdV = readIORef                                    -- Unlifted, hence takes a value

(!=) :: IO (IORef a) -> IO a -> IO ()
(!=) r x = do { rr <- r; xx <- x; writeIORef rr xx }

(.+), (.-) :: IO Int -> IO Int -> IO Int
(.+) = liftM2 (+)
(.-) = liftM2 (-)

(.:) :: IO a -> IO [a] -> IO [a]
(.:) = liftM2 (:)

showIO :: Show a => IO a -> IO String
showIO = liftM show

main :: IO ()
main = do
    varNum <- ref (val 0)
    -- newVar (): bump the counter, then build the name "v<counter>".
    let newVar = (=<<) $ \() -> val varNum != (rdV varNum .+ val 1) >>
                                val 'v' .: showIO (rdV varNum)
    -- gen n: generate n fresh names, left to right.
    let gen = (=<<) $ \n -> case n of
                              0 -> return []
                              _ -> (newVar $ val ()) .: gen (val n .- val 1)
    gen (val 5) >>= print                          -- ["v1","v2","v3","v4","v5"]
+2  A: 

Here's a possible way, by sigfpe. It doesn't cover lambdas, but it seems it can be extended to them.

sdcvvc
A couple of interesting techniques that I may try to use. It's a similar idea to the one in the code I gave (not surprising, since Andrzej Filinski and I studied together at CMU). Also, I'd classify that embedding under "works in theory" rather than "works in practice". Plus, I guess I'd like to avoid templates for now and see what nice embeddings are possible without hacking the syntax.
RD1