A simple expression like (x) - y is interpreted differently depending on whether x is a type name or not. If x is not a type name, (x) - y just subtracts y from x. But if x is a type name, (x) - y computes the negative of y and casts the resulting value to type x.
In a typical C or C++ compiler, the question of whether x is a type or not is answerable because the parser communicates such information to the lexer as soon as it processes a typedef or struct declaration. (I think that such required violation of levels was the nastiest part of the design of C.)
But in Java, x may not be defined until later in the source code. How does a Java compiler disambiguate such an expression?
It's clear that a Java compiler needs multiple passes, since Java doesn't require declaration-before-use. But that seems to imply that the first pass has to do a very sloppy job of parsing expressions, and then a later pass has to re-parse them more accurately. That seems wasteful.
Is there a better way?