views:

397

answers:

6

Functional programming is a whole new paradigm that requires a complete change of thought patterns compared to procedural programming. It uses higher-order functions, purity, monads, etc., which we don't usually see in imperative and object-oriented languages.

My question is how the implementation of these languages differs from that of imperative or object-oriented languages, with respect to, for example, memory management or internals such as pointers.

There are functional languages that run on top of the JVM. Does this mean that these languages internally work like the other languages on the JVM?

A: 

Everything runs on the same processor (and thus the same assembly instructions), so as long as you go deep enough, everything's the same internally.

Amber
I'm no expert in this area, but the fact that everything runs on the same assembly instructions (x86/x86_64) seems to be more a result of the dominance of the x86 architecture. There were, back in the day, Lisp machines that had processors designed specifically for Lisp. http://en.wikipedia.org/wiki/Lisp_machine
John Paulett
Sure. But I was more referring to the fact that assuming you're running all types of languages on the same platform, at some level or another they're all going to boil down to the same thing. (Functional, procedural, logic-based, etc.)
Amber
This makes no sense. A Lisp compiler is implemented differently from a C compiler and compiles to different instruction sequences. That all vehicles drive on the same road does not make all cars look or work the same: some have two wheels, some have four wheels, and so on.
Rainer Joswig
But all of them eventually translate to forces exerting themselves between the car and the road. Again, the point was not that the languages -are- the same, but simply that *if you dig deep enough*, languages are all interfaces to resources that are independent of the language.
Amber
@Dav: in particular, Dinesh is asking about implementing compilers/interpreters for functional languages.
outis
My answer was mostly addressing this sentence in the OP: *"There are functional languages that run on top of the JVM. Does this mean that these languages internally work like the other languages on the JVM?"* I was providing an example to show why that logic isn't valid.
Amber
+1  A: 

I guess there are many aspects of functional languages that benefit from special attention in an implementation; one that comes to mind is:

Functional languages use recursion a lot, so any implementation should try to optimize this case, e.g. by identifying tail recursion and transforming it into a loop internally (thus avoiding function-call overhead such as saving and restoring the stack frame). See http://en.wikipedia.org/wiki/Tail%5Frecursion
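
A minimal, hypothetical sketch of what such a tail-recursive function looks like (the names sumTo and go are mine; in GHC the accumulator also needs to be evaluated strictly for truly constant-space execution):

    -- Tail-recursive sum: the recursive call is the last thing go does,
    -- so a compiler can reuse the current stack frame, i.e. compile it as a loop.
    sumTo :: Int -> Int
    sumTo n = go n 0
      where
        go 0 acc = acc
        go k acc = go (k - 1) (acc + k)  -- tail call: nothing remains to do afterwards

    main :: IO ()
    main = print (sumTo 1000000)  -- 500000500000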

Carsten
Yes, obviously FP has advantages and some disadvantages, but the question is about the internal workings.
Legend
+5  A: 

Implementations of functional programming languages use a wide range of techniques. An excellent introduction to the implementation of Scheme (a Lisp dialect) is the book Lisp in Small Pieces by Christian Queinnec.

Rainer Joswig
+1  A: 

The implementations of a functional programming language such as Haskell are often very different from those of imperative languages. You can read about one way of doing it here. Even though the paper is several years old, I believe the ideas are still used.

Amuck
While that book is a bit old, it is still a very good read. For me it was a real aha experience about the beauty of implementing functional languages.
rvirding
+6  A: 

Code resulting from functional languages uses many features you see to varying degrees in non-functional languages. Garbage collection has passed into general usage. Tail-call optimization is done in GCC and VC++.

Closures, however, are a hallmark of functional programming; you don't see one without the other. If you define "functional languages" to refer only to pure functional languages, the two aren't synonymous, as you also find closures in imperative languages that support functional programming, e.g. JavaScript, or Scheme (which is technically imperative, though the functional paradigm is what's mostly used). Closures might be implemented with a spaghetti stack for the call stack, or by copying out local variables when exiting a stack frame, or by allocating local variables on the heap and letting garbage collection take care of them.
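
As a small illustration of why the last two strategies are needed (makeAdder below is a hypothetical example of mine, not something from the answer): the returned function still refers to n after makeAdder has returned, so n must outlive makeAdder's stack frame, e.g. by being copied into the closure or allocated on the heap.

    -- The parameter n is captured by the returned lambda and is still needed
    -- after the call to makeAdder has finished.
    makeAdder :: Int -> (Int -> Int)
    makeAdder n = \x -> x + n

    main :: IO ()
    main = do
      let addFive = makeAdder 5
      print (addFive 10)  -- 15: the captured n = 5 outlived makeAdder's frame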

Once you have closures, anonymous functions are relatively easy (with an interpreter, they're really easy). With a compiler, the function is converted to bytecode at compile time, and the bytecode (rather, the address of the entry point) is associated at runtime with the current environment.
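
To make the idea of associating code with the current environment concrete, here is a toy interpreter sketch (all names are hypothetical and mine) in which a closure is literally a pair of the captured environment and the function body:

    -- A tiny lambda-calculus evaluator: evaluating a lambda expression packages
    -- the current environment together with the parameter name and the body.
    data Expr
      = Var String
      | Lam String Expr
      | App Expr Expr
      | Lit Int
      | Add Expr Expr

    type Env = [(String, Value)]

    data Value = VInt Int | VClosure Env String Expr

    eval :: Env -> Expr -> Value
    eval env (Var x)   = maybe (error ("unbound: " ++ x)) id (lookup x env)
    eval env (Lam x b) = VClosure env x b  -- capture the environment of definition
    eval env (App f a) =
      case eval env f of
        VClosure cenv x b -> eval ((x, eval env a) : cenv) b  -- run body in captured env
        _                 -> error "not a function"
    eval _   (Lit n)   = VInt n
    eval env (Add a b) =
      case (eval env a, eval env b) of
        (VInt x, VInt y) -> VInt (x + y)
        _                -> error "not numbers"

    main :: IO ()
    main =
      case eval [] (App (Lam "n" (Add (Var "n") (Lit 1))) (Lit 41)) of
        VInt r -> print r  -- 42
        _      -> error "unexpected result"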

Function composition can rely on anonymous functions. When a compiler encounters a function composition operator f . g, it creates an anonymous function that calls the two arguments f and g, passing the result of one as the argument to the other.
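
Written out by hand in Haskell (compose is an illustrative name; the built-in operator is (.)), the generated function amounts to this:

    -- Composition builds an anonymous function that feeds g's result into f.
    compose :: (b -> c) -> (a -> b) -> (a -> c)
    compose f g = \x -> f (g x)

    main :: IO ()
    main = print (compose (* 2) (+ 3) 10)  -- (10 + 3) * 2 = 26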

Monads can be implemented in OO languages; they're just not as necessary as they are in pure functional languages. I/O monads aren't anything too special; they just rely on the fact that the underlying platform allows side effects.
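
To illustrate that a monad is just an ordinary interface rather than special machinery, here is a hypothetical from-scratch Maybe-style monad (Opt, None, Some, and safeDiv are my names); the same return/bind pattern can be expressed as a class with methods in an OO language:

    -- A Maybe-style monad defined from scratch: a data type plus return and bind.
    data Opt a = None | Some a
      deriving Show

    instance Functor Opt where
      fmap _ None     = None
      fmap f (Some x) = Some (f x)

    instance Applicative Opt where
      pure = Some
      None   <*> _ = None
      Some f <*> x = fmap f x

    instance Monad Opt where
      None   >>= _ = None   -- short-circuit: absence propagates
      Some x >>= f = f x    -- pass the value on to the next step

    safeDiv :: Int -> Int -> Opt Int
    safeDiv _ 0 = None
    safeDiv x y = Some (x `div` y)

    main :: IO ()
    main = print (Some 10 >>= \x -> safeDiv x 2 >>= \y -> safeDiv y 0)  -- None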

outis
+1  A: 

The biggest difference that comes to mind is that functional languages tend to be designed so that the source code is desugared to a mathematically simple and powerful intermediate language. This language usually contains lambda, function calls, if/else, machine types, something like let, and not a whole lot more. The transformed code is deeply nested, verbose, and not realistically human-readable. The surface syntax is thrown away.
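
As a rough, hand-written illustration (this approximates the idea; it is not actual compiler output, and GHC's real Core language is richer), a surface definition using guards and operators might reduce to nothing but lambdas, applications and case expressions:

    -- Surface syntax:
    clamp :: Int -> Int -> Int -> Int
    clamp lo hi x
      | x < lo    = lo
      | x > hi    = hi
      | otherwise = x

    -- Approximately what the desugared core might look like: only lambdas,
    -- function application and case remain; the guard syntax is gone.
    clampCore :: Int -> Int -> Int -> Int
    clampCore =
      \lo -> \hi -> \x ->
        case (<) x lo of
          True  -> lo
          False -> case (>) x hi of
                     True  -> hi
                     False -> x

    main :: IO ()
    main = print (clamp 0 10 42, clampCore 0 10 42)  -- (10,10)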

A compiler for a language like this has to do some inlining and a few closure optimizations to produce decent code. (To me these baseline closure optimizations, escape analysis and so forth, seem nontrivial, but it might just be lack of familiarity.)

Jason Orendorff