views: 163
answers: 4

I'm reading 'Real-World Functional Programming' by Tomas Petricek & Jon Skeet, and I understand the difference between declarative & imperative programming.

What I was wondering is how the primitive operators & functions are implemented: are declarative languages constructed from imperative operators & functions?

Cheers

AWC

+5  A: 

If I understand your question correctly, I don't think that is a hard and fast rule. For example, you can use a functional language like Lisp to create an interpreter for itself. In this case, the interpreter is implemented in a functional manner (because Lisp is a functional language).
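To make the idea concrete, here is a toy sketch of a Lisp-style eval (written in Python rather than Lisp, and invented purely for illustration, not McCarthy's actual definition), interpreting s-expressions represented as nested lists:

```python
# Toy Lisp-style evaluator over s-expressions (nested Python lists).
# Illustration only: a tiny subset (quote, if, lambda, application).

def evaluate(expr, env):
    if isinstance(expr, str):               # symbol: look it up
        return env[expr]
    if not isinstance(expr, list):          # literal, e.g. a number
        return expr
    op, *args = expr
    if op == "quote":                       # (quote x) -> x, unevaluated
        return args[0]
    if op == "if":                          # (if cond then else)
        cond, then, alt = args
        return evaluate(then if evaluate(cond, env) else alt, env)
    if op == "lambda":                      # (lambda (params) body)
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(op, env)                  # application: evaluate and apply
    return fn(*[evaluate(a, env) for a in args])

env = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
# ((lambda (x) (* x x)) (+ 1 2))
square_of_sum = [["lambda", ["x"], ["*", "x", "x"]], ["+", 1, 2]]
print(evaluate(square_of_sum, env))         # 9
```

Note that the evaluator itself is written without mutation: it is just a recursive function from expressions and environments to values, which is what makes the self-hosting trick feel natural in Lisp.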

Also, if you have a language that is Turing Complete, you can use it to implement a parser/interpreter/compiler for any other language. There are imperative Turing-Complete languages, and functional/declarative Turing-Complete languages.

But all code eventually comes down to assembly or machine code, which is inherently imperative. In theory, what I said above is true, but apparently not in practice :).

As an interesting historical aside, LISP was a completely theoretical construct; it was a mathematical notation for computer languages. It remained theoretical until LISP's eval function was implemented in machine code by Steve Russell on an IBM 704:

According to what is reported by Paul Graham in Hackers & Painters, p. 185, McCarthy said: "Steve Russell said, look, why don't I program this eval..., and I said to him, ho, ho, you're confusing theory with practice, this eval is intended for reading, not for computing. But he went ahead and did it. That is, he compiled the eval in my paper into IBM 704 machine code, fixing bugs, and then advertised this as a Lisp interpreter, which it certainly was. So at that point Lisp had essentially the form that it has today..." (emphasis mine)

So once again, the subtleties between theory and practice. :)

Vivin Paliath
You are correct, this is the meaning of the question.
AWC
But the Lisp interpreter is then implemented in terms of imperative code. At some point, unless you're running on a Lisp machine, stuff has to get compiled down to assembly.
dsimcha
I alluded to that at the end of my answer. If you think about it *theoretically* then it makes sense, but in practice, not so. For example, I could implement a LISP parser in Perl and a Perl parser in LISP. If you look at it on paper, in the first case you're seeing a functional language being implemented in an imperative manner, whereas in the second case, you're seeing an imperative language being implemented in a functional manner. However, when you put it into practice, it's going to be obviously imperative, because assembly language is imperative.
Vivin Paliath
+3  A: 

The low-level machine (CPU, assembly-language level) is imperative, so obviously at some point the implementation will have to take that into account. However, implementing certain kinds of functional languages, like Haskell, takes some very non-obvious approaches to create a run-time with decent performance.

Strangely enough, most imperative languages go through a compilation phase where all the code is transformed into a more declarative intermediate form, such as static single assignment (SSA).
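A minimal sketch of that transformation (using my own toy representation, not any real compiler's IR): renaming straight-line code into static single assignment form, where every variable is bound exactly once, so the imperative assignments read like a series of declarative bindings:

```python
# Toy SSA renaming for straight-line code (no branches, no phi nodes).
# Statements are (target, rhs) pairs; rhs is a tuple of variable names,
# operators, and literals, all as strings. Invented for illustration.

def to_ssa(stmts):
    counter = {}   # next version number for each source variable
    current = {}   # source variable -> its latest SSA name

    def fresh(v):
        counter[v] = counter.get(v, 0) + 1
        name = f"{v}{counter[v]}"
        current[v] = name
        return name

    def use(tok):
        return current.get(tok, tok)  # operators/literals pass through

    out = []
    for target, rhs in stmts:
        new_rhs = tuple(use(t) for t in rhs)  # rewrite uses first
        out.append((fresh(target), new_rhs))  # then bind a fresh version
    return out

# x = 1; x = x + 1; y = x * 2
prog = [("x", ("1",)), ("x", ("x", "+", "1")), ("y", ("x", "*", "2"))]
print(to_ssa(prog))
# [('x1', ('1',)), ('x2', ('x1', '+', '1')), ('y1', ('x2', '*', '2'))]
```

After renaming, no variable is ever reassigned, which is exactly the property that makes SSA-based IRs (like LLVM's) easy to analyze and optimize.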

There are also examples of compiling Scheme (functional) directly to C code (imperative).

Jared Updike
+2  A: 

Your question is a bit unclear. Under the hood, processor instructions are imperative in nature. If your code is supposed to run on a von Neumann machine, it must eventually run as imperative code.

It may be possible to build a machine (with some specific architecture) that inherently supports those operations. In fact, Lisp machines were designed to help run Lisp programs. While I'm not familiar with the hardware characteristics of the LispM, it probably does qualify as providing some primitive operations at a more declarative level.

Mehrdad Afshari
+2  A: 

Are declarative languages constructed from imperative operators & functions?

Sometimes. It is a property of the implementation, not the language.

In the 1980s, lots of people compiled functional programs to graphs, then rewrote the graphs to simpler graphs. This computation usually involved updating the graphs in place, but otherwise it was about as declarative as you could wish. To learn more, look up "graph reduction" or read "The four-stroke reduction engine" by Chris Clack and Simon Peyton Jones.
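A toy sketch of that in-place update trick (an invented representation, far cruder than a real reduction engine): an application node, once reduced, is overwritten with its result, so a subgraph shared by several parents is evaluated at most once:

```python
# Toy graph reduction with in-place overwriting. A node is either a
# literal or an application of a Python function to an argument node.
# Invented for illustration; real reducers handle full lambda terms.

calls = []  # records each time the "expensive" function actually runs

class Node:
    def __init__(self, kind, **fields):
        self.kind = kind
        self.__dict__.update(fields)

def lit(n):
    return Node("lit", value=n)

def app(fn, arg):
    return Node("app", fn=fn, arg=arg)

def reduce_node(node):
    if node.kind == "app":
        result = node.fn(reduce_node(node.arg))
        node.kind, node.value = "lit", result   # overwrite the redex in place
    return node.value

def expensive_inc(n):
    calls.append(n)
    return n + 1

shared = app(expensive_inc, lit(4))             # one shared redex
total = reduce_node(shared) + reduce_node(shared)
print(total, len(calls))                        # 10 1
```

The second `reduce_node(shared)` finds a literal where the application used to be, so the expensive function runs only once; this sharing is what makes lazy functional languages affordable, even though the update itself is an imperative mutation.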

Eventually compiler writers figured out ways to get better performance by compiling functional programs directly to native machine code. If the native machine is a typical commodity machine, this means typical imperative operations. However, if you look up the pioneering work of Professor Arvind at MIT, his group designed and built dataflow machines where the fundamental computational operations are more declarative in nature. It was great work, but all the special-purpose architectures that flourished in the 1980s have been made extinct by the great Microsoft/Intel virtuous cycle (more software -> more PCs sold -> cheaper processors -> more PCs sold -> ... -> $300 netbooks that do really cool stuff).

Norman Ramsey