As Moggi proposed 20 years ago, the effectful function space `->` of languages like ML can be decomposed into the standard total function space `=>` plus a strong monad `T` to capture the effects:

    A -> B   decomposes to   A => (T B)
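To see the decomposition in types, here is a minimal sketch (the `tick` example is my own, not part of the question): an ML function of effectful type `int ref * int -> int` becomes, taking `T = IO`, a pure function into IO-computations:

```haskell
import Data.IORef

-- ML:  fun tick r n = (r := !r + 1; n + !r)
-- Effectful ML type:  int ref * int -> int.
-- Decomposed with T = IO: a pure function returning a computation.
tick :: IORef Int -> Int -> IO Int
tick r n = do
  modifyIORef r (+ 1)   -- the effect, sequenced explicitly in T
  v <- readIORef r
  return (n + v)        -- the pure result

main :: IO ()
main = do
  r <- newIORef 0
  x <- tick r 10
  y <- tick r 10
  print (x, y)          -- (11,12): the effect is visible across calls
```

The point is that `tick r` is an ordinary (total) Haskell function `Int -> IO Int`; all the ML-style effects live inside the `IO` computation it returns.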
Now, Haskell supports monads, including an IO monad that appears sufficient for the effects in ML, and it has a function space that contains => (but also includes partial functions). So, we should be able to translate a considerable fragment of ML into Haskell via this decomposition. In theory I think this works.
My question is whether an embedding like this can be practical: is it possible to design a Haskell library that allows programming in Haskell in a style not too far from ML? And if so, how good would the performance be?
My criterion for "practical" is that existing ML code that makes extensive use of effects could be transcribed into Haskell via the embedding relatively easily, including complicated cases involving higher-order functions.
To make this concrete, my own attempt at such a transcription via the embedding is below. The `main` function is a transcription of some simple ML code that imperatively generates 5 distinct variable names. Rather than use the decomposition directly, my version lifts functions so that they evaluate their arguments; the definitions prior to `main` are a mini-library that includes lifted primitives. This works okay, but some aspects aren't totally satisfactory:
- There's a little too much syntactic noise for the injection of values into computations via `val`. Having unlifted versions of functions (like `rdV`) would help with this, at the cost of requiring these to be defined.
- Non-value definitions like `varNum` require monadic binding via `<-` in a `do` expression. This then forces any definitions that depend on them to also be in the same `do` expression.
- It seems, then, that the whole program might end up being one huge `do` expression. This is how ML programs are often considered, but in Haskell it's not as well supported: e.g., you're forced to use `case` instead of equations.
- I guess there will be some laziness despite threading the IO monad throughout. Given that the ML program would be designed for strict evaluation, the laziness should probably be removed. I'm uncertain what the best way to do this is, though.
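On that last point, one possible fix is a sketch like the following (the strict `val'` and `(.+!)` names are my own invention, not part of the mini-library): make the lifted primitives force their arguments and results with `seq` and `$!`, so thunks are not accumulated as the IO is threaded:

```haskell
-- Strict injection: force the value before wrapping it in the monad.
val' :: Monad m => a -> m a
val' x = x `seq` return x

-- Strict lifted addition: force the sum before returning it.
(.+!) :: IO Int -> IO Int -> IO Int
(.+!) mx my = do
  x <- mx
  y <- my
  return $! x + y   -- evaluate x + y to WHNF before returning

main :: IO ()
main = do
  n <- val' (2 + 3) .+! val' 1
  print n
```

For values deeper than `Int` this only forces to weak head normal form; `BangPatterns` or `Control.DeepSeq.force` would be needed to remove laziness from nested structures.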
So, any advice on improving this, or on better approaches using the same decomposition, or even quite different ways of achieving the same broad goal of programming in Haskell using a style that mirrors ML? (It's not that I dislike the style of Haskell, it's just that I'd like to be able to map existing ML code easily.)
import Data.IORef
import Control.Monad

-- Inject a value into a computation.
val :: Monad m => a -> m a
val = return

-- Allocate a reference, evaluating the computation for its initial value.
ref :: IO a -> IO (IORef a)
ref = join . liftM newIORef

-- Unlifted read, hence takes a value (the reference itself).
rdV :: IORef a -> IO a
rdV = readIORef

-- Lifted assignment: evaluates both the reference and the new value.
(!=) :: IO (IORef a) -> IO a -> IO ()
(!=) r x = do { rr <- r; xx <- x; writeIORef rr xx }

(.+), (.-) :: IO Int -> IO Int -> IO Int
(.+) = liftM2 (+)
(.-) = liftM2 (-)

(.:) :: IO a -> IO [a] -> IO [a]
(.:) = liftM2 (:)

showIO :: Show a => IO a -> IO String
showIO = liftM show

main = do
  varNum <- ref (val 0)
  let newVar = (=<<) $ \() ->
        val varNum != (rdV varNum .+ val 1) >>
        val 'v' .: showIO (rdV varNum)
  let gen = (=<<) $ \n -> case n of
        0 -> return []
        _ -> (newVar $ val ()) .: gen (val n .- val 1)
  gen (val 5)   -- yields ["v1","v2","v3","v4","v5"]