I've been thinking about how type inference works in the following OCaml program:
let rec f x = (g x) + 5
and g x = f (x + 5);;
Granted, the program is quite useless (calling either function loops forever), but what about the types? OCaml says:
val f : int -> int = <fun>
val g : int -> int = <fun>
This exactly matches my intuition, but how does the type inference algorithm know this?
Say the algorithm considers "f" first: the only constraint it can derive there is that the return type of "g" must be "int" (its result is added to 5), and therefore the return type of "f" is also "int". But it cannot infer the type of "f"'s argument from the definition of "f" alone.
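To make that concrete, I tried breaking the mutual recursion by giving "f" a maximally polymorphic stand-in for "g" (the stub below is my own, purely for the experiment). The toplevel agrees that only the return type gets pinned down:

let g _ = failwith "dummy";;  (* stand-in for g, inferred as 'a -> 'b *)
let f x = (g x) + 5;;

val f : 'a -> int = <fun>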
On the other hand, if it considers "g" first, it can see that the type of its argument must be "int" (because of "x + 5"). But without having analysed "f" first, it cannot know that the return type of "g" is also "int".
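The mirror-image experiment, with a stand-in for "f" (again my own stub), shows exactly that: the argument type of "g" is forced to "int", but its return type stays open:

let f _ = failwith "dummy";;  (* stand-in for f, inferred as 'a -> 'b *)
let g x = f (x + 5);;

val g : int -> 'a = <fun>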
What is the magic behind it?