In object-oriented programming, we might say the core concepts are:
- encapsulation
- inheritance
- polymorphism
What would that be in functional programming?
In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state. Functional programming has its roots in the lambda calculus, a formal system developed in the 1930s to investigate function definition, function application, and recursion. Many functional programming languages can be viewed as embellishments to the lambda calculus. - Wikipedia
In a nutshell:
- Abstraction, the process of making a function by parameterizing over some part of an expression.
- Application, the process of evaluating a function by replacing its parameters with specific values.
At some level, that's all there is to it.
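To make those two concrete, here is a minimal sketch in Haskell (the names square1 and ten are mine, purely for illustration):

```haskell
-- Abstraction: make a function by parameterizing over the 3 in "3 * 3 + 1"
square1 :: Int -> Int
square1 x = x * x + 1

-- Application: evaluate the function by substituting a specific value
ten :: Int
ten = square1 3   -- 3 * 3 + 1 = 10
```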
Not directly an answer to your question, but I'd like to point out that "object-oriented" and functional programming aren't necessarily orthogonal. The "core concepts" you cite have more general counterparts which apply just as well to functional programming.
Encapsulation, more generally, is modularisation. All purely functional languages that I know of support modular programming. You might say that those languages implement encapsulation better than the typical "OO" variety, since side-effects break encapsulation, and pure functions have no side-effects.
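As a sketch of what that looks like in practice (a toy Stack module of my own, not something from the answer), a Haskell module can export a type while hiding its constructor, so clients can only reach the data through the functions provided, and purity guarantees no side effect leaks around that interface:

```haskell
module Stack (Stack, empty, push, pop) where

-- MkStack is not exported, so the representation stays hidden.
newtype Stack a = MkStack [a]

empty :: Stack a
empty = MkStack []

push :: a -> Stack a -> Stack a
push x (MkStack xs) = MkStack (x : xs)

pop :: Stack a -> Maybe (a, Stack a)
pop (MkStack [])       = Nothing
pop (MkStack (x : xs)) = Just (x, MkStack xs)
```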
Inheritance, more generally, is logical implication, which is what a function represents. The canonical subclass -> superclass relation is a kind of implicit function. In functional languages, this is expressed with type classes or implicits (I consider implicits to be the more general of the two).
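One way to read that in Haskell terms (my own illustration, with invented names like Drawable): a class constraint acts like the implication described above, and the compiler supplies the evidence for it implicitly:

```haskell
-- The role a superclass plays in OO, expressed as a type class.
class Drawable a where
  draw :: a -> String

data Circle = Circle Double

instance Drawable Circle where
  draw (Circle r) = "circle of radius " ++ show r

-- The constraint reads as an implication: if a is Drawable,
-- then render accepts it. The instance is passed implicitly.
render :: Drawable a => a -> String
render x = "<svg>" ++ draw x ++ "</svg>"
```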
Polymorphism in the "OO" school is achieved by means of subtyping (inheritance). There is a more general kind of polymorphism known as parametric polymorphism (a.k.a. generics), which you will find to be supported by pure-functional programming languages. Additionally, some support "higher kinds", or higher-order generics (a.k.a. type constructor polymorphism).
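For instance (a sketch of mine, using the standard Prelude Functor class): parametric polymorphism lets one definition work at every type, while a higher-kinded abstraction ranges over type constructors rather than types:

```haskell
-- Parametric polymorphism: one definition, every element type a
duplicate :: a -> (a, a)
duplicate x = (x, x)

-- Higher kinds: f ranges over type constructors such as Maybe or []
mapTwice :: Functor f => (a -> a) -> f a -> f a
mapTwice g = fmap (g . g)
```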
What I'm trying to say is that your "core concepts of OO" aren't specific to OO in any way. I, for one, would argue that there aren't any core concepts of OO, in fact.
There's no community consensus on what are the essential concepts in functional programming. In Why Functional Programming Matters, John Hughes argues that they are higher-order functions and lazy evaluation. In Wearing the Hair Shirt: A Retrospective on Haskell, Simon Peyton Jones says the real essential is not laziness but purity. Richard Bird would agree. But there's a whole crowd of Scheme and ML programmers who are perfectly happy to write programs with side effects.
As someone who has practiced and taught functional programming for twenty years, I can give you a few ideas that are widely believed to be at the core of functional programming:
Nested, first-class functions with proper lexical scoping are at the core. This means you can create an anonymous function at run time, whose free variables may be parameters or local variables of an enclosing function, and you get a value you can return, put into data structures, and so on. (This is the most important form of higher-order functions, but some higher-order functions (like qsort!) can be written in C, which is not a functional language.)
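A sketch of what that means (my example, not the answerer's): the anonymous function below closes over the parameter n of its enclosing function, and the resulting value can be stored in a data structure and applied later:

```haskell
-- makeAdder returns an anonymous function whose free variable n
-- is a parameter of the enclosing function: a closure.
makeAdder :: Int -> (Int -> Int)
makeAdder n = \x -> x + n

adders :: [Int -> Int]
adders = [makeAdder 1, makeAdder 10]   -- closures stored in a list

six :: Int
six = head adders 5                    -- (5 + 1) = 6
```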
Means of composing functions with other functions to solve problems. Nobody does this better than John Hughes.
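In Haskell that style might look like this (my own small pipeline, nothing from Hughes's paper): the problem is solved by wiring existing functions together with the composition operator instead of writing a loop:

```haskell
import Data.Char (toUpper)
import Data.List (sortOn)

-- Read right to left: uppercase the text, split it into words,
-- sort by length, and keep the three longest.
threeLongestShouted :: String -> [String]
threeLongestShouted = take 3 . reverse . sortOn length . words . map toUpper
```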
Many functional programmers believe purity (freedom from effects, including mutation, I/O, and exceptions) is at the core of functional programming. Many others do not.
Polymorphism, whether it is enforced by the compiler or not, is a core value of functional programmers. Confusingly, C++ programmers call this concept "generic programming." When polymorphism is enforced by the compiler it is generally a variant of Hindley-Milner, but the more expressive System F is also a viable basis for functional languages. And with languages like Scheme, Erlang, and Lua, you can do functional programming without a static type system.
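As an illustration of compiler-enforced polymorphism (mine): under Hindley-Milner you can omit the type annotation entirely and the compiler infers the most general type:

```haskell
-- No annotation given: GHC infers the principal type
--   twice :: (a -> a) -> a -> a
twice f x = f (f x)

demo :: (Int, String)
demo = (twice (+ 1) 0, twice (++ "!") "hi")   -- (2, "hi!!")
```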
Finally, a large majority of functional programmers believe in the value of inductively defined data types, sometimes called "recursive types". In languages with static type systems these are generally known as "algebraic data types", but you will find inductively defined data types even in material written for beginning Scheme programmers. Inductively defined types usually ship with a language feature called pattern matching, which supports a very general form of case analysis. Often the compiler can tell you if you have forgotten a case. I wouldn't want to program without this language feature. (A luxury once sampled becomes a necessity.)
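Here is a minimal sketch (my own) of an inductively defined type with pattern matching; compiled with GHC's -Wincomplete-patterns warning, forgetting either case below would be reported:

```haskell
-- An inductively defined (algebraic) data type: a binary tree.
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Case analysis by pattern matching, one equation per constructor.
size :: Tree a -> Int
size Leaf         = 0
size (Node l _ r) = 1 + size l + size r
```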
Let me repeat the answer I gave in a discussion in the Bangalore Functional Programming group:
A functional program consists only of functions. Functions compute values from their inputs. We can contrast this with imperative programming, where, as the program executes, the values of mutable locations change. In other words, in C or Java, a variable called X refers to a location whose value can change. But in functional programming, X is the name of a value (not a location). Anywhere that X is in scope, it has the same value (i.e., it is referentially transparent). In FP, functions are also values. They can be passed as arguments to other functions. This is known as higher-order functional programming. Higher-order functions let us model an amazing variety of patterns. For instance, look at the map function in Lisp. It represents a pattern where the programmer needs to do 'something' to every element of a list. That 'something' is encoded as a function and passed as an argument to map.
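To make the map pattern concrete, here is a sketch in Haskell rather than Lisp; myMap is my own name for a hand-rolled version of the standard map:

```haskell
-- The "do something to every element" pattern. The "something"
-- is the function f, passed in as an ordinary argument.
myMap :: (a -> b) -> [a] -> [b]
myMap _ []       = []
myMap f (x : xs) = f x : myMap f xs

squares :: [Int]
squares = myMap (\x -> x * x) [1, 2, 3]   -- [1, 4, 9]
```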
As we saw, the most notable feature of FP is its freedom from side effects. If a function does anything more than compute a value from its input, it is causing a side effect. Such functions are not allowed in pure FP. Side-effect-free functions are easy to test: there is no global state to set up before running the test and no global state to check afterwards. Each function can be tested independently just by providing its input and examining the return value. This makes it easy to write automated tests. Another advantage of side-effect freedom is that it gives you better control over parallelism.
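Because such a function is pure, a test is nothing more than supplying inputs and comparing return values; no framework or global setup is assumed in this toy check of mine:

```haskell
celsiusToFahrenheit :: Double -> Double
celsiusToFahrenheit c = c * 9 / 5 + 32

-- Testing a pure function: provide inputs, examine outputs. Done.
tests :: Bool
tests = and
  [ celsiusToFahrenheit 0   == 32
  , celsiusToFahrenheit 100 == 212
  ]
```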
Many FP languages treat recursion and iteration correctly. They do this by supporting tail calls: if a function's last act is to call itself, the current stack frame is discarded before the call. In other words, if a function calls itself tail-recursively 1,000 times, the stack does not grow 1,000 frames deep. This makes special looping constructs unnecessary in these languages.
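A sketch of the accumulator style that makes this work (mine; in Haskell the accumulator needs a bang pattern so laziness doesn't build up thunks instead of stack frames):

```haskell
{-# LANGUAGE BangPatterns #-}

-- The recursive call is the last thing go does (a tail call),
-- so it runs as a loop in constant stack space.
sumTo :: Int -> Int
sumTo n = go n 0
  where
    go 0 !acc = acc
    go k !acc = go (k - 1) (acc + k)
```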
Lambda calculus is the most boiled-down version of an FP language. Higher-level FP languages like Haskell compile down to enriched variants of the lambda calculus. It has only three syntactic constructs, yet it is expressive enough to represent any abstraction or algorithm.
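Those three constructs, written down as a Haskell data type (a standard textbook encoding; the constructor names are mine):

```haskell
-- The entire syntax of the untyped lambda calculus.
data Term
  = Var String        -- variable:     x
  | Lam String Term   -- abstraction:  \x. e
  | App Term Term     -- application:  e1 e2

-- The identity function \x. x as a term:
identity :: Term
identity = Lam "x" (Var "x")
```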
My opinion is that FP should be viewed as a meta-paradigm. We can write programs in any style, including OOP, using the simple functional abstractions provided by the Lambda Calculus.
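As a small illustration of that claim (my own sketch): an "object" can be encoded as a record of functions closing over its state, with a "method call" returning a new object instead of mutating the old one:

```haskell
-- A purely functional "object": methods are fields, and the
-- private state lives in a closure rather than a mutable slot.
data Counter = Counter
  { increment :: Counter
  , current   :: Int
  }

mkCounter :: Int -> Counter
mkCounter n = Counter { increment = mkCounter (n + 1), current = n }

two :: Int
two = current (increment (increment (mkCounter 0)))   -- 2
```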
Thanks, -- Vijay
Original discussion link: http://groups.google.co.in/group/bangalore-fp/browse_thread/thread/4c2cfa7985d7eab3