I came upon the Curry-Howard Isomorphism relatively late in my programming life, and perhaps this contributes to my being utterly fascinated by it. It implies that for every programming concept there exists a precise analogue in formal logic, and vice versa. Here's a "basic" list of such analogies, off the top of my head:

program/definition        | proof
type/declaration          | proposition
inhabited type            | theorem/lemma
function                  | implication
function argument         | hypothesis/antecedent
function result           | conclusion/consequent
function application      | modus ponens
recursion                 | induction
identity function         | tautology
non-terminating function  | absurdity/contradiction
tuple                     | conjunction (and)
disjoint union            | disjunction (or)          -- corrected by Antal S-Z
parametric polymorphism   | universal quantification

So, to my question: what are some of the more interesting/obscure implications of this isomorphism? I'm no logician so I'm sure I've only scratched the surface with this list.

For example, here are some programming notions for which I'm unaware of pithy names in logic:

currying                  | "((a & b) => c) iff (a => (b => c))"
scope                     | "known theory + hypotheses"
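
In Haskell, the two directions of the currying equivalence are witnessed by the standard Prelude functions curry and uncurry; for concreteness:

curry :: ((a, b) -> c) -> (a -> b -> c)
curry f a b = f (a, b)

uncurry :: (a -> b -> c) -> ((a, b) -> c)
uncurry f (a, b) = f a b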

And here are some logical concepts which I haven't quite pinned down in programming terms:

primitive type?           | axiom
set of valid programs?    | theory

Edit:

Here are some more equivalences collected from the responses:

function composition      | syllogism                -- from Apocalisp
continuation-passing      | double negation          -- from camccann
+5  A: 

Disjunction (OR) would be the type:

data Disjunction a b = Both a b | JustLeft a | JustRight b

Isomorphic to the type a * b + a + b or Either (a,b) (Either a b), which is isomorphic to (a + 1) * b + a, or Either a (Maybe a, b).
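
For concreteness, here is a pair of functions witnessing that last isomorphism (function names are mine, not standard):

toEither :: Disjunction a b -> Either a (Maybe a, b)
toEither (JustLeft a)  = Left a
toEither (Both a b)    = Right (Just a, b)
toEither (JustRight b) = Right (Nothing, b)

fromEither :: Either a (Maybe a, b) -> Disjunction a b
fromEither (Left a)             = JustLeft a
fromEither (Right (Just a, b))  = Both a b
fromEither (Right (Nothing, b)) = JustRight b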

Also...

function composition      | syllogism
Apocalisp
+1 for syllogism, I forgot about that
pelotom
+6  A: 

I really like this question. I don't know a whole lot, but I do have a few things (assisted by the Wikipedia article, which has some neat tables and such itself):

  1. I think that sum types/union types (e.g. data Either a b = Left a | Right b) are equivalent to inclusive disjunction. And, though I'm not very well acquainted with Curry-Howard, I think this demonstrates it. Consider the following function:

    andImpliesOr :: (a,b) -> Either a b
    andImpliesOr (a,_) = Left a
    

    If I understand things correctly, the type says that (a ∧ b) → (a ★ b) and the definition says that this is true, where ★ is either inclusive or exclusive or, whichever Either represents. Suppose Either represented exclusive or, ⊕; then (a ∧ b) ↛ (a ⊕ b). For instance, ⊤ ∧ ⊤ ≡ ⊤, but ⊤ ⊕ ⊤ ≡ ⊥, and ⊤ ↛ ⊥. In other words, if both a and b are true, then the hypothesis is true but the conclusion is false, and so this implication must be false. However, clearly, (a ∧ b) → (a ∨ b), since if both a and b are true, then at least one is true. Thus, if discriminated unions are some form of disjunction, they must be the inclusive variety. I think this holds as a proof, but feel more than free to disabuse me of this notion.

  2. Similarly, your definitions for tautology and absurdity as the identity function and non-terminating functions, respectively, are a bit off. The true formula is represented by the unit type, which is the type which has only one element (data ⊤ = ⊤; often spelled () and/or Unit in functional programming languages). This makes sense: since that type is guaranteed to be inhabited, and since there's only one possible inhabitant, it must be true. The identity function just represents the particular tautology that a → a.

    Your comment about non-terminating functions is, depending on what precisely you meant, further off. Curry-Howard operates on the type system, and non-termination is not encoded there. According to Wikipedia, dealing with non-termination is an issue: adding it produces inconsistent logics (e.g., I can define wrong :: a -> b by wrong x = wrong x, and thus “prove” that a → b for any a and b). If this is what you meant by “absurdity”, then you're exactly correct. If instead you meant the false statement, then what you want is any uninhabited type, e.g. something defined by data ⊥—that is, a data type without any way to construct it. Since it has no values at all, it is uninhabited, which is equivalent to false. (For a concrete rendering, see the sketch after this list.) I think you could probably also use a -> b, since if we forbid non-terminating functions, then this is also uninhabited, but I'm not 100% sure.

  3. Wikipedia says that axioms are encoded in two different ways, depending on how you interpret Curry-Howard: either in the combinators or in the variables. I think the combinator view means that the primitive functions we are given encode the things we can say by default (similar to the way that modus ponens is an axiom because function application is primitive). And I think that the variable view may actually mean the same thing—combinators, after all, are just global variables which are particular functions. As for primitive types: if I'm thinking about this correctly, then I think that primitive types are the entities—the primitive objects that we're trying to prove things about.

  4. According to my logic and semantics class, the fact that (a ∧ b) → c ≡ a → (b → c) (and likewise ≡ b → (a → c)) is called the exportation equivalence law, at least in natural deduction proofs. I didn't notice at the time that it was just currying—I wish I had, because that's cool!

  5. While we now have a way to represent inclusive disjunction, we don't have a way to represent the exclusive variety. We should be able to use the definition of exclusive disjunction to represent it: a ⊕ b ≡ (a ∨ b) ∧ ¬(a ∧ b). I don't know how to write negation, but I do know that ¬p ≡ p → ⊥, and both implication and falsehood are easy. We should thus be able to represent exclusive disjunction by:

    data ⊥
    data Xor a b = Xor (Either a b) ((a,b) -> ⊥)
    

    This defines ⊥ to be the empty type with no values, which corresponds to falsity; Xor is then defined to contain both (and) Either an a or a b (or) and a function (implication) from (a,b) (and) to the bottom type (false). However, I have no idea what this means. (Edit 1: Now I do, see the next paragraph!) Since there are no values of type (a,b) -> ⊥ (are there?), I can't fathom what this would mean in a program. Does anyone know a better way to think about either this definition or another one? (Edit 1: Yes, camccann.)

    Edit 1: Thanks to camccann's answer (more particularly, the comments he left on it to help me out), I think I see what's going on here. To construct a value of type Xor a b, you need to provide two things. First, a witness to the existence of an element of either a or b as the first argument; that is, a Left a or a Right b. And second, a proof that there are not elements of both types a and b—in other words, a proof that (a,b) is uninhabited—as the second argument. Since you'll only be able to write a function of type (a,b) -> ⊥ if (a,b) is uninhabited, what does it mean for that to be the case? It would mean that some part of an object of type (a,b) could not be constructed; in other words, that at least one of a and b (and possibly both) is uninhabited as well! In this case, if we're thinking about pattern matching, you couldn't possibly pattern-match on such a tuple: supposing that b is uninhabited, what would we write that could match the second part of that tuple? Thus, we cannot pattern match against it, which may help you see why this makes it uninhabited. Now, a total function which can never actually be applied (as is true here, since (a,b) is uninhabited) doesn't need any cases at all: from a pattern-matching perspective, since there is nothing to match against, a function with no cases (and hence no body) is vacuously total, and so everything's OK.
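
To make items 2 and 5 concrete, here is the promised sketch of the empty type and its eliminator in GHC Haskell (the names Empty and exFalso are mine; modern Haskell packages the same idea as Void and absurd in Data.Void):

{-# LANGUAGE EmptyCase #-}

data Empty                  -- no constructors, hence no values: "false"

-- Ex falso quodlibet: from a (nonexistent) proof of false, prove anything.
-- The case expression is total precisely because Empty has no constructors.
exFalso :: Empty -> b
exFalso x = case x of {}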

A lot of this is me thinking aloud/proving (hopefully) things on the fly, but I hope it's useful. I really recommend the Wikipedia article; I haven't read through it in any sort of detail, but its tables are a really nice summary, and it's very thorough.

Antal S-Z
+1 for pointing out that Either is inclusive-or. Note that (Either a a) is a theorem (for all a).
Apocalisp
Question re. 2(b): what is the difference between a function type whose only inhabitant is non-terminating, and an uninhabited function type? For instance, if I declared a type B with no constructors, then defined a function A -> B like so: `fun f (a:A) : B := f(a)`. This would typecheck in a lot of languages, even though it's impossible to ever return a B. So the function is "inhabited" in one sense, but its "inhabitant" is absurd... so it's not really inhabited at all. Hope this makes some kind of sense :)
pelotom
Bottoms are not proofs. “It is absurd and impossible to suppose that the unknowable and indeterminate should contain and determine.” -- Aristoteles
Apocalisp
fantastic quote :)
pelotom
@Tom: Just to drive home the point about non-termination, **if the logic is consistent, all programs terminate**. Non-termination only occurs in type systems representing inconsistent logics, or equivalently, type systems for Turing-complete languages.
camccann
**Apocalisp:** `Either a a` shouldn't quite be a theorem: `Either ⊥ ⊥` is still uninhabited. **Tom:** As camccann said, consistency implies termination. Thus, a consistent type system won't allow you to express `f :: a -> b`, and so the type would be uninhabited; an inconsistent type system *would* have an inhabitant for the type, but one that wouldn't terminate. **camccann:** Are there inconsistent type systems which aren't Turing-complete, occupying some in-between point on the hierarchy? Or is that last step (adding general recursion or whatever) precisely equivalent to inconsistency?
Antal S-Z
@Antal S-Z: Actually, I'm not sure! Given an arbitrary logic, I don't know how you would map it onto a system of computation. If such a mapping is already known, and it preserves logical structure via a type system, then I suspect it is indeed equivalent, but I'm not certain.
camccann
+8  A: 

You're muddying things a little bit regarding nontermination. Falsity is represented by uninhabited types, which by definition can't be non-terminating because there's nothing of that type to evaluate in the first place.

Non-termination represents contradiction--an inconsistent logic. An inconsistent logic will of course allow you to prove anything, including falsity, however.

Ignoring inconsistencies, type systems typically correspond to an intuitionistic logic, and are by necessity constructivist, which means certain pieces of classical logic can't be expressed directly, if at all. On the other hand this is useful, because if a type is a valid constructive proof, then a term of that type is a means of constructing whatever you've proven the existence of.

A major feature of the constructivist flavor is that double negation is not equivalent to no negation at all: ¬¬P is not the same proposition as P. Negation is rarely a primitive in a type system, so instead we represent it as implication of falsehood, e.g., not P becomes P -> Falsity. Double negation would thus be a function with type (P -> Falsity) -> Falsity, which is clearly not equivalent to something of just type P.
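
In code (the type names here are mine; Data.Void in modern Haskell calls the empty type Void):

data Falsity                    -- uninhabited: the false proposition

type Not p = p -> Falsity       -- ¬P as "P implies falsehood"

type DoubleNeg p = Not (Not p)  -- ¬¬P, i.e. (P -> Falsity) -> Falsity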

However, there's an interesting twist on this! In a language with parametric polymorphism, type variables range over all possible types, including uninhabited ones, so a fully polymorphic type such as ∀a. a is, in some sense, almost-false. So what if we write double almost-negation by using polymorphism? We get a type that looks like this: ∀a. (P -> a) -> a. Is that equivalent to something of type P? Indeed it is: merely apply it to the identity function.
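
A sketch of that equivalence, assuming GHC's RankNTypes extension (function names mine):

{-# LANGUAGE RankNTypes #-}

toDN :: p -> (forall a. (p -> a) -> a)
toDN x = \k -> k x   -- we already hold a p; hand it to whatever consumer arrives

fromDN :: (forall a. (p -> a) -> a) -> p
fromDN dn = dn id    -- instantiate a to p and apply to the identity function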

But what's the point? Why write a type like that? Does it mean anything in programming terms? Well, you can think of it as a function that already has something of type P somewhere, and needs you to give it a function that takes P as an argument, with the whole thing being polymorphic in the final result type. In a sense, it represents a suspended computation, waiting for the rest to be provided. In this sense, these suspended computations can be composed together, passed around, invoked, whatever. This should begin to sound familiar to fans of some languages, like Scheme or Ruby--because what it means is that double-negation corresponds to continuation-passing style, and in fact the type I gave above is exactly the continuation monad in Haskell.
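
For reference, a minimal version of that monad (essentially Control.Monad.Cont with the newtype spelled out):

newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont $ \k -> c (k . f)

instance Applicative (Cont r) where
  pure x              = Cont ($ x)
  Cont cf <*> Cont cx = Cont $ \k -> cf (\f -> cx (k . f))

instance Monad (Cont r) where
  Cont c >>= f = Cont $ \k -> c (\x -> runCont (f x) k)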

camccann
Thanks for the correction, I've removed "falsity" as a synonym of nontermination. +1 for double-negation <=> CPS!
pelotom
I don't quite get the intuition behind representing ¬p as `P -> Falsity`. I understand why it works (¬p ≡ p → ⊥), but I don't get the code version. `P -> ⊥` should be inhabited precisely when `P` is not, right? But shouldn't this function always be inhabited? Or possibly never, actually, since you can't return an instance of `⊥`? I don't quite see the conditionality of it. What's the intuition here?
Antal S-Z
@Antal S-Z: The intuition is intuitionistic logic, of course! But yes, actually writing such a function is difficult. I see in your profile that you know Haskell, so maybe you're thinking in algebraic data types and pattern matching? Consider that an uninhabited type must have **no constructors** and, thus, nothing to pattern match against. You'd have to write a "function" with no body, which isn't legal Haskell. In fact, to my knowledge, there's no way to write a term of negated type in Haskell without using runtime exceptions or non-termination.
camccann
@Antal S-Z: On the other hand, if the equivalent logic is consistent, all functions must be total, e.g., all pattern matching must be exhaustive. So in order to write a function with no patterns, the parameter type must have no constructors, e.g., be uninhabited. Therefore, such a function would be legal--and thus its own type inhabited--precisely and only when its argument is uninhabited. Hence, a function `P -> Falsity` is equivalent to `P` being false.
camccann
Aha, I think I get it. The version I had been entertaining was something like `f x = x`, which would be instantiable iff `P = ⊥`, but that clearly wasn't generic enough. So the idea is that to *return* a valueless type, you need no body; but for the function to be definable and total, you need no *cases*, and so if `P` is uninhabited, everything works out? That's a little wonky, but I think I see it. That seems to interact rather oddly with my definition of the `Xor` type… I'll have to think about that. Thanks!
Antal S-Z
Antal S-Z: "P -> ⊥ should be inhabited precisely when P is not" -- Yes! There is exactly one function which inhabits P -> ⊥, and it is the polymorphic identity function applied to ⊥, e.g. ((forall A. A->A) ⊥) = ⊥ -> ⊥.
pelotom
@Antal S-Z: Actually, the type of `f x = x` is *too* generic--namely, `∀a. a -> a`. As logic, that's clearly a tautology; as a program, it's the identity function. If you apply it to a false argument, you get falsity back--but where are you going to find a term of an uninhabited type?
camccann
Tom: That precise statement can't be true, because you're overly constraining the types involved: if we have `data ⊥1` and `data ⊥2`, then we can't prove that `⊥1 -> ⊥2` doing that, even though we should be able to. Something like that was originally my first thought too :) Also, you'll never be able to apply the identity function to an object of type `⊥1` or `⊥2`, since those values don't exist.
Antal S-Z
camccann: And actually, that's what I meant by "not generic enough"; `f x = x` only maps `⊥1` to itself, and doesn't go between any two bottom types. I see why you could instead look at it as being too generic, though, and definitely why it doesn't work.
Antal S-Z
Antal S-Z: data ⊥1 and data ⊥2 are the same type, up to isomorphism. They differ only in name, not in meaning. And the fact that you can never apply a function does not mean that the function does not exist. In fact, the fact that you can never apply id⊥ is precisely what makes its existence possible!
pelotom
@Antal S-Z: Well, uninhabited types are equivalent up to isomorphism anyway (i.e. initial objects in a category of types and functions). Empty pattern matches would allow terms proving that equivalence, though. It's [actually been proposed for Haskell](http://hackage.haskell.org/trac/ghc/ticket/2431), in fact.
camccann
I understand that they're the same up to isomorphism; my point is that using `id` doesn't allow you to *show* that, it requires you to already know it. The no-argument-no-body case, on the other hand, *does* allow you to show it, as was just said. And Tom, I seem to have misunderstood your comment about "polymorphic identity function applied to ⊥"; disregard my comment, then, please :)
Antal S-Z
+3  A: 

While it's not a simple isomorphism, this discussion of constructive LEM is a very interesting result. In particular, in the conclusion section, Oleg Kiselyov discusses how the use of monads to get double-negation elimination in a constructive logic is analogous to distinguishing computationally decidable propositions (for which LEM is valid in a constructive setting) from all propositions. The notion that monads capture computational effects is an old one, but this instance of the Curry-Howard isomorphism helps put it in perspective and helps get at what double-negation really "means".

wren ng thornton
+3  A: 

Your chart is not quite right; in many cases you have confused types with terms.

function type              implication
function                   proof of implication
function argument          proof of hypothesis
function result            proof of conclusion
function application RULE  modus ponens
recursion                  n/a [1]
fold (foldr for lists)     structural induction
fold for naturals          mathematical induction   -- data N = Z | S N; sketched below
identity function          proof of A -> A, for all A
non-terminating function   n/a [2]
tuple                      normal proof of conjunction
sum                        disjunction
n/a [3]                    first-order universal quantification
parametric polymorphism    second-order universal quantification
currying                   (A,B) -> C -||- A -> (B -> C), for all A,B,C
primitive type             axiom
types of typeable terms    theory
function composition       syllogism
substitution               cut rule
value                      normal proof

[1] The logic for a Turing-complete functional language is inconsistent. Recursion has no correspondence in consistent theories. In an inconsistent logic/unsound proof theory you could call it a rule which causes inconsistency/unsoundness.

[2] Again, this is a consequence of Turing-completeness. This would be a proof of an anti-theorem if the logic were consistent -- thus, it can't exist.

[3] Doesn't exist in functional languages, since they elide first-order logical features: all quantification and parametrization is done over formulae. If you had first-order features, there would be a kind other than *, * -> *, etc.; the kind of elements of the domain of discourse. For example, in Father(X,Y) :- Parent(X,Y), Male(X), X and Y range over the domain of discourse (call it Dom), and Male :: Dom -> *.
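
To make the induction-as-fold rows concrete, here is a sketch using the chart's own data N = Z | S N:

data N = Z | S N

-- The fold (recursor) for N has exactly the shape of mathematical induction:
-- given a proof for the base case Z, and a way to turn a proof for n into a
-- proof for S n, we obtain a proof for every natural.
foldN :: r -> (r -> r) -> N -> r
foldN base _    Z     = base
foldN base step (S n) = step (foldN base step n)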

Frank Atanassow
[1] - yes, I should've been more specific. I meant "structural recursion" rather than unconstrained recursion, which I guess is the same as "fold". [3] - it does exist in dependently-typed languages
pelotom
+5  A: 

Since you explicitly asked for the most interesting and obscure ones:

You can extend C-H to many interesting logics and formulations of logics to obtain a really wide variety of correspondences. Here I've tried to focus on some of the more interesting ones rather than on the obscure, plus a couple of fundamental ones that haven't come up yet.

evaluation             | proof normalisation/cut-elimination
variable               | assumption
S K combinators        | axiomatic formulation of logic   
pattern matching       | left-sequent rules 
subtyping              | implicit entailment (not reflected in expressions)
intersection types     | implicit conjunction
union types            | implicit disjunction
open code              | temporal next
closed code            | necessity
effects                | possibility
reachable state        | possible world
monadic metalanguage   | lax logic
non-termination        | truth in an unobservable possible world
distributed programs   | modal logic S5/Hybrid logic
meta variables         | modal assumptions
explicit substitutions | contextual modal necessity
pi-calculus            | linear logic

EDIT: A reference I'd recommend to anyone interested in learning more about extensions of C-H:

"A Judgmental Reconstruction of Modal Logic" http://www.cs.cmu.edu/~fp/papers/mscs00.pdf - this is a great place to start because it starts from first principles and much of it is aimed to be accessible to non-logicians/language theorists. (I'm the second author though, so I'm biased.)

RD1
thanks for providing some less trivial examples (that really was the spirit of the original question), although I admit several of these are over my head... are the terms "necessity" and "possibility" precisely defined in logic? how do they translate to their computational equivalents?
pelotom
I can point to published papers for each of these, so they are precisely defined. Modal logic is much studied (since Aristotle) and relates different modes of truth - "A is necessarily true" means "in every possible world A is true", while "A is possibly true" means "A is true in one possible world". You can prove things like "(necessarily(A -> B) and possibly A) -> possibly B". The modal inference rules directly yield the expression constructs, typing, and reduction rules, as usual in C-H. See: http://en.wikipedia.org/wiki/Modal_logic and http://www.cs.cmu.edu/~fp/papers/mscs00.pdf
RD1
@pelotom: You may want to read a bit about [other kinds of logic](http://en.wikipedia.org/wiki/Non-classical_logic). Plain classical logic often isn't useful in this context--I mentioned intuitionistic logic in my answer, but [modal](http://en.wikipedia.org/wiki/Modal_logic) and [linear](http://en.wikipedia.org/wiki/Linear_logic) logic are even "weirder", but also really awesome.
camccann
Thanks for the pointers, sounds like I have some reading to do!
pelotom
@camccann: It's classical logic that's weird! :-) A week ago I had to do a short proof in classical logic for an equivalence in a paper, and I just couldn't get my head around the classical part; eventually I had to ask a colleague for help, and he reminded me how to do De Morgan's for implication.
RD1
@RD1: You think that's bad, I've spent so much time thinking in Haskell that I have to mentally translate formulas of predicate logic into type signatures before they make sense. :( Not to mention that the law of the excluded middle and such starts to seem really confusing and perhaps suspicious.
camccann