views:

3675

answers:

9

For those of you experienced in both Haskell and some flavor of Lisp, I'm curious how "pleasant" (to use a horrid term) it is to write code in Haskell vs. Lisp.

Some background: I'm learning Haskell now, having earlier worked with Scheme and CL (and a little foray into Clojure). Traditionally, you could consider me a fan of dynamic languages for the succinctness and rapidity they provide. I quickly fell in love with Lisp macros, as they gave me yet another way to avoid verbosity and boilerplate.

I'm finding Haskell incredibly interesting, as it's introducing me to ways of coding I didn't know existed. It definitely has some aspects that seem like they would aid in achieving agility, like the ease of partially applying functions. However, I'm a bit concerned about losing Lisp macros (I assume I lose them; truth be told, I may just not have learned about them yet?) and about the static type system.

Would anyone who has done a decent amount of coding in both worlds mind commenting on how the experiences differ, which you prefer, and if said preference is situational?

+20  A: 

First of all, don't worry about losing particular features like dynamic typing. As you're familiar with Common Lisp, a remarkably well-designed language, I assume you're aware that a language can't be reduced to its feature set. It's all about a coherent whole, isn't it?

In this regard, Haskell shines just as brightly as Common Lisp does. Its features combine to provide you with a way of programming that makes code extremely short and elegant. The lack of macros is mitigated somewhat by more elaborate (but also harder to understand and use) concepts like monads and arrows. The static type system adds to your power rather than getting in your way as it does in most object-oriented languages.

On the other hand, programming in Haskell is much less interactive than Lisp, and the tremendous amount of reflection present in languages like Lisp just doesn't fit the static view of the world that Haskell presupposes. The tool sets available to you are therefore quite different between the two languages, but hard to compare to one another.

I personally prefer the Lisp way of programming in general, as I feel it fits the way I work better. However, this doesn't mean you're bound to do so as well.

Matthias Benkard
+21  A: 

Short answer:

  • almost anything you can do with macros you can do with a higher-order function (and I include monads, arrows, etc.), but it might require more thinking (only the first time, though -- it's fun, and you'll be a better programmer for it); see the sketch below, and
  • the static type system is sufficiently general that it never gets in your way, and somewhat surprisingly it actually "aids in achieving agility" (as you said): when your program compiles you can be almost certain that it is correct, and this certainty lets you try out things you might otherwise be afraid to try -- there is a "dynamic" feel to programming, although it's not the same as with Lisp.

[Note: There is a "Template Haskell" that lets you write macros just as in Lisp, but strictly speaking you should never need it.]
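
To give a flavour of the first point, here is a minimal sketch (the name withInputFile and the file path are just illustrative) of how something like Common Lisp's WITH-OPEN-FILE idiom becomes an ordinary higher-order function in Haskell, with no macro in sight:

    import Control.Exception (bracket)
    import System.IO (Handle, IOMode (ReadMode), hClose, hGetLine, openFile)

    -- Roughly what CL's WITH-OPEN-FILE macro provides, but as a plain
    -- higher-order function: the "body" is just a function argument, and
    -- bracket guarantees the handle is closed even if the body throws.
    withInputFile :: FilePath -> (Handle -> IO r) -> IO r
    withInputFile path = bracket (openFile path ReadMode) hClose

    main :: IO ()
    main = withInputFile "/etc/hostname" $ \h -> hGetLine h >>= putStrLn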

ShreevatsaR
+2  A: 

As I continue my Haskell-learning journey, it seems that one thing that helps "replace" macros is the ability to define your own infix operators and customize their precedence and associativity. Kinda complicated, but an interesting system!
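
For example, here is a minimal sketch (the operator itself is made up, though several libraries define something similar) of a left-to-right "pipeline" operator with its own fixity declaration:

    -- A made-up pipeline operator: the fixity declaration controls both
    -- associativity (infixl = left-associative) and precedence (1 = very low).
    infixl 1 |>
    (|>) :: a -> (a -> b) -> b
    x |> f = f x

    main :: IO ()
    main = print ([1 .. 10] |> map (* 2) |> filter (> 5) |> sum)  -- prints 104

(base's Data.Function even ships a similar operator, (&), with the same fixity.)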

J Cooper
+5  A: 

There's less need for metaprogramming in Haskell than in Common Lisp, because much can be structured around monads and the added syntax makes embedded DSLs look less tree-like. But there's always Template Haskell, as ShreevatsaR mentioned, and even Liskell (Haskell semantics + Lisp syntax) if you like the parentheses.
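
As a tiny illustration of the "less tree-like" point, here is a minimal sketch of a monadic embedded DSL (it assumes the mtl package; the names Script and say are made up), where do-notation gives the embedded language a flat, statement-like surface instead of nested parentheses:

    import Control.Monad.Writer (Writer, execWriter, tell)

    -- A made-up micro-DSL: a "script" is a Writer computation that
    -- accumulates lines of output as it runs.
    type Script = Writer [String] ()

    say :: String -> Script
    say msg = tell [msg]

    greeting :: Script
    greeting = do          -- flat, statement-like syntax
      say "Hello, Haskell"
      say "Goodbye, Lisp"

    main :: IO ()
    main = mapM_ putStrLn (execWriter greeting)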

Herrmann
+3  A: 

In Haskell you can define an if function, which is not possible as an ordinary function in a strict Lisp: without a macro or special form, both branches would be evaluated. This works because of laziness, which also allows for more modularity in programs. The classic paper "Why Functional Programming Matters" by John Hughes explains how laziness enhances composability.
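
A minimal sketch of that point (if' is just an illustrative name):

    -- An ordinary function that behaves like if: thanks to laziness,
    -- only the branch that is actually selected ever gets evaluated.
    if' :: Bool -> a -> a -> a
    if' True  t _ = t
    if' False _ e = e

    main :: IO ()
    main = putStrLn (if' (2 > 1) "took the then-branch" (error "never evaluated"))

In a strict Lisp the analogous function would evaluate both branch expressions before the call, so the error branch would blow up every time; that is exactly why a macro or special form is needed there.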

luntain
Scheme (one of the two major Lisp dialects) actually does have lazy evaluation (via delay and force), though it's not the default as it is in Haskell.
crimson13
+4  A: 

I'm a Common Lisp programmer.

Having tried Haskell some time ago my personal bottom line was to stick with CL.

Reasons:

  • dynamic typing (check out http://prog.vub.ac.be/~wdmeuter/PostJava04/papers/Costanza.pdf)
  • optional and keyword arguments
  • uniform homoiconic list syntax with macros
  • prefix syntax (no need to remember precedence rules)
  • impure and thus more suited for quick prototyping
  • powerful object system with meta-object protocol
  • mature standard
  • wide range of compilers

Haskell does have its own merits of course and does some things in a fundamentally different way, but it just doesn't cut it in the long term for me.

skypher
Hey do you happen to have the title of that Costanza paper you linked to? Looks like that file was moved.
spacemanaki
It might have been this one: http://p-cos.net/documents/dynatype.pdf
skypher
@skypher Thanks a lot!
spacemanaki
A: 

I'm struggling to understand something: other than the type system of Haskell, is there anything else that can't be implemented with a proper set of macros over Lisp?

Practically anything can be implemented — but it isn't necessarily practical.
Chuck
The type system of Haskell is a pretty big "other" :)
J Cooper
Of course you can implement Haskell in Lisp and then write your programs in this Haskell you implemented. But wouldn't you then be programming in Haskell? Why trouble yourself -- why not just use Haskell to begin with?
yairchu
A: 

There are really cool things that you can achieve in Lisp with macros that are cumbersome (if possible) in Haskell. Take for example the `memoize' macro (see Chapter 9 of Peter Norvig's PAIP). With it, you can define a function, say foo, and then simply evaluate (memoize 'foo), which replaces foo's global definition with a memoized version. Can you achieve the same effect in Haskell with higher-order functions?

Not quite (AFAIK), but you can do something similar by modifying the function (assuming it's recursive) to take the function to call recursively as a parameter(!) rather than simply calling itself by name: http://www.haskell.org/haskellwiki/Memoization
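
A minimal sketch of that trick (fibOpen and memoFix are made-up names; the wiki page has sturdier variants):

    -- The recursive call is a parameter ("open recursion") instead of a
    -- hard-wired self-reference...
    fibOpen :: (Int -> Integer) -> Int -> Integer
    fibOpen _    0 = 0
    fibOpen _    1 = 1
    fibOpen self n = self (n - 1) + self (n - 2)

    -- ...so the knot can be tied through a lazily built lookup table.
    memoFix :: ((Int -> a) -> Int -> a) -> Int -> a
    memoFix f = memo
      where
        memo  = (table !!)
        table = map (f memo) [0 ..]

    fib :: Int -> Integer
    fib = memoFix fibOpen

    main :: IO ()
    main = print (fib 80)  -- fast; the directly recursive version would take ages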
j_random_hacker
+2  A: 

Hello,

Concerning macros, here is a page that talks about it: Hello Haskell, Goodbye Lisp. It explains a point of view in which macros are simply not needed in Haskell, and it comes with a short example for comparison.

Example where a Lisp macro is required to avoid evaluating both arguments:

(defmacro doif (x y) `(if ,x ,y))

Example where Haskell does not systematically evaluate both arguments, with no need for anything like a macro definition:

doif x y = if x then (Just y) else Nothing

And voilà

Hibou57
That's a common misconception. Yes, in Haskell laziness means that you don't need macros when you want to avoid evaluating some parts of an expression, but those are only the most trivial subset of all macro uses. Google for "The Swine Before Perl" for a talk demonstrating a macro that cannot be done with laziness. Also, if you *do* want some bit to be strict, then you can't do that as a function -- mirroring the fact that Scheme's `delay` cannot be a function.
Eli Barzilay
@Eli Barzilay: I don't find this example very convincing. Here's a complete, simple Haskell translation of slide 40: http://pastebin.com/8rFYwTrE
Reid Barton
@Reid Barton: Huh? The main point of that paper is creating a *macro* which is in fact a small DSL for specifying automatons that get "compiled" to Scheme code. Your code, OTOH, is a kind of simple translation of the code -- but (a) it uses the table lookup that Shriram talks about in the beginning, and much more importantly, (b) you're using plain Haskell, and the result is still not close to defining such a DSL. AFAICT, the only thing this demonstrates is "it's easy to write such code, easier when you can use function values in a table". I.e., not much related to macros.
Eli Barzilay
@Eli Barzilay: I don't understand your response at all. `accept` *is* the (E)DSL. The `accept` function is the analogue of the macro outlined on the previous pages, and the definition of `v` is exactly parallel to the definition of `v` in Scheme on slide 40. The Haskell and Scheme functions compute the same thing with the same evaluation strategy. At best, the macro allows you to expose more of the structure of your program to the optimizer. You can hardly claim this as an example where macros increase the expressive power of the language in a way not replicated by lazy evaluation.
Reid Barton
I'm not following any of this. First of all, yes -- `accept` is the function that does the work, but it's not a DSL, it's a function like all other functions -- and things like using it in all sub-lists, or the required use of `where` with its own scope, are exactly what the macro makes unnecessary. As for lazy evaluation -- you're not using it in any significant way, so I don't see how this whole argument is relevant.
Eli Barzilay
@Eli Barzilay: In a hypothetical lazy Scheme, you could write this: http://pastebin.com/TN3F8VVE My general claim is that this macro buys you very little: slightly different syntax and an easier time for the optimizer (but it wouldn't matter to a "sufficiently smart compiler"). In exchange, you have trapped yourself into an inexpressive language; how do you define an automaton that matches any letter without listing them all? Also, I don't know what you mean by "using it in all sub-lists" or "the required use of where with its own scope".
Reid Barton
Reid: (a) a lazy Scheme is not hypothetical -- one has been part of Racket for several years; (b) the fact that macros are still useful there is a good hint; (c) what you wrote is also showing why a macro is useful -- it *doesn't* use one and is therefore not the DSL that Shriram is talking about; (d) by "using it in sublists" etc. I meant that you have certain requirements on your "DSL" that come from the implementation (e.g., the use of `accept`) -- that's one reason why it's *not* a DSL;
Eli Barzilay
(e) The illusion that laziness makes flow-control macros (ones that have no new bindings) redundant can be seen as bogus if you think about adding a strict operator to a lazy language -- using such an operator requires special forms (and macros) too; (f) another point: if macros are not needed in Haskell, how come it does have them? (And even before TH, there were uses of CPP.)
Eli Barzilay
Finally, (g) sure you trap yourself in an "inexpressive language" -- the whole point is a *DSL* -- not a GPL. Obviously, it's possible to write a more sophisticated macro that will have Scheme expressions (like a predicate for a symbol instead of listing them all), but that goes beyond the DSL in this example.
Eli Barzilay
OK, I give up. Apparently your definition of DSL is "the arguments to a macro" and so my lazy Scheme example is not a DSL, despite being syntactically isomorphic to the original (`automaton` becoming `letrec`, `:` becoming `accept`, `->` becoming nothing in this version). Whatever.
Reid Barton