There is a blog post somewhere with a type-level implementation of the SKI combinator calculus, which is known to be Turing-complete.
Turing-complete type systems have basically the same benefits and drawbacks that Turing-complete languages have: you can compute anything, but you can prove very little about the computation. In particular, you cannot prove that it will ever terminate, which here means you cannot prove that type checking will ever finish.
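To get a feel for what type-level computation looks like, here is a minimal sketch in Scala 2 (my own toy example, not taken from that blog post; the names Nat, Zero, Succ and Plus are made up): natural numbers are encoded as types, and addition is carried out entirely by the compiler during implicit resolution.

    // Peano naturals at the type level: no values of these types are ever created.
    sealed trait Nat
    sealed trait Zero extends Nat
    sealed trait Succ[N <: Nat] extends Nat

    // A type-level function: Plus[A, B] "returns" its result through the Out type member.
    trait Plus[A <: Nat, B <: Nat] { type Out <: Nat }

    object Plus {
      type Aux[A <: Nat, B <: Nat, C <: Nat] = Plus[A, B] { type Out = C }

      // Base case: 0 + B = B
      implicit def zeroPlus[B <: Nat]: Aux[Zero, B, B] =
        new Plus[Zero, B] { type Out = B }

      // Inductive case: (A + 1) + B = (A + B) + 1
      implicit def succPlus[A <: Nat, B <: Nat, C <: Nat](
          implicit rest: Aux[A, B, C]): Aux[Succ[A], B, Succ[C]] =
        new Plus[Succ[A], B] { type Out = Succ[C] }
    }

    object TypeLevelDemo {
      type One   = Succ[Zero]
      type Two   = Succ[One]
      type Three = Succ[Two]

      // The compiler proves 1 + 2 = 3 while type checking; nothing happens at runtime.
      val proof = implicitly[Plus.Aux[One, Two, Three]]

      // implicitly[Plus.Aux[One, Two, Two]]   // would be rejected at compile time
    }

Because the compiler can be made to recurse like this, it can in principle be made to recurse forever, which is exactly the termination problem mentioned above.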
One example of type-level computation is the new type-preserving collection transformers in Scala 2.8. In Scala 2.8, methods like map, filter and so on are guaranteed to return a collection of the same type that they were called on. So, if you filter a Set[Int], you get back a Set[Int], and if you map a List[String], you get back a List[whatever the return type of the anonymous function is].
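For example (my own REPL-style sketch, with the result types the 2.8 rules give you):

    val numbers: Set[Int] = Set(1, 2, 3)
    val evens = numbers filter { _ % 2 == 0 }   // still a Set[Int]

    val words: List[String] = List("one", "two", "three")
    val lengths = words map { _.length }        // a List[Int], because the function returns Int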
Now, as you can see, map can actually transform the element type. So, what happens if the new element type cannot be represented by the original collection type? Example: a BitSet can only contain non-negative integers. So, what happens if you have a BitSet and you map each number to its string representation?

    someBitSet map { _.toString() }

The result would have to be a BitSet of Strings, but that's impossible. So, Scala chooses the most derived supertype of BitSet that can hold Strings, which in this case is Set[String].
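In the REPL that looks roughly like this (my own example; the exact inferred type can vary between Scala versions, but under the 2.8 rules described here it is Set[String]):

    import scala.collection.immutable.BitSet

    val bits = BitSet(1, 2, 3)
    val doubled = bits map { _ * 2 }         // still a BitSet, because Ints map to Ints
    val strings = bits map { _.toString }    // falls back to Set[String]: a BitSet cannot hold Strings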
All of this computation happens at compile time, or more precisely at type-checking time, using type-level functions. It is therefore statically guaranteed to be type-safe, even though the types are actually computed and thus not known at design time.
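The computed type is an ordinary static type, so the compiler keeps checking everything you do with it. A small sketch (again my own example):

    import scala.collection.immutable.BitSet

    val strings = BitSet(1, 2, 3) map { _.toString }
    val lengths = strings map { _.length }   // fine: the compiler knows the elements are Strings
    // val back: BitSet = strings            // rejected at compile time: strings is not a BitSet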