Assuming n1 is of some built-in type, the cast to int & performs the reinterpretation of the lvalue n1 (whatever type it had) as an lvalue of type int.
In the context of the declaration int n2 = (int &) n1, if n1 is by itself an lvalue of type int, the cast is superfluous: it changes absolutely nothing. If n1 is an lvalue of type const int, then the cast simply casts away the constness, which is also superfluous in the above context. If n1 is an lvalue of some other type, the cast reinterprets the memory occupied by n1 as an object of type int (this is called type punning). If n1 is not an lvalue, the code is ill-formed.
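For illustration, a minimal sketch of the first two (superfluous) cases might look like this; the variable names are just placeholders:

int a = 42;
int m1 = (int &) a;    // a is already an int lvalue: the cast changes nothing

const int b = 42;
int m2 = (int &) b;    // merely casts away constness before reading the value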
So, in code like int n2 = (int &) n1, the cast to int & is only non-redundant (has some actual effect) when it does type punning, i.e. when n1 is an lvalue of some other type (not int). For example,
float n1 = 5.0;
int n2 = (int &) n1;
which would be equivalent to
int n2 = *(int *) &n1;
and to
int n2 = *reinterpret_cast<int *>(&n1);
Needless to say, this is a pretty bad programming practice.
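To see why, here is a minimal self-contained sketch (assuming a typical platform with 32-bit int and IEEE-754 float); the reference cast reinterprets the bits of n1 and formally violates strict aliasing, whereas memcpy is the well-defined way to inspect the object representation:

#include <cstdio>
#include <cstring>

int main() {
    float n1 = 5.0f;

    int n2 = (int &) n1;          // type punning: n2 gets the bit pattern of 5.0f,
    std::printf("%d\n", n2);      // typically 1084227584, not 5 (and formally UB)

    int n3;
    std::memcpy(&n3, &n1, sizeof n3);   // well-defined copy of the representation
    std::printf("%d\n", n3);
}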
This is, BTW, one of the cases where using a dedicated C++-style cast is strongly preferred. If the author of the code had used reinterpret_cast instead of a C-style cast, you probably wouldn't have had to ask this question.
Of course, if n1 itself is of type int, there's no meaningful explanation for this cast. In that case it is, again, completely superfluous.
P.S. There's also a possibility that n1 is a class with an overloaded conversion operator to int & type, which is a different story entirely... Anyway, you have to say what n1 is when you ask questions like that.
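For completeness, a sketch of that last possibility; the class name Wrapper is made up purely for illustration:

struct Wrapper {
    int value = 0;
    operator int &() { return value; }   // user-defined conversion to int &
};

Wrapper n1;
int n2 = (int &) n1;   // here the C-style cast invokes the conversion operator
                       // (a static_cast-style conversion), not type punning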