I have the following code:

#include <iostream>
using namespace std;

int main() {
    int i;
    const char *p = "this is a string";
    i = reinterpret_cast<int>(p);
    cout << i << "\n";
    return 0;
}

The output is: 7648. Please explain reinterpret_cast.
You showed a good example of where not to use reinterpret_cast<>. Casting a char* to an int is not something you can rely on: the standard makes the pointer-to-integer mapping implementation-defined, and if int is too small to hold a pointer (as on most 64-bit platforms) the cast won't even compile. Getting an odd-looking number back is one of the tamer possible outcomes; it could just as well have launched a missile at the United States.
There isn't much to explain. The result of reinterpret_cast is platform-specific. What your test outputs is most likely either the address of the string literal, or whatever remains of that address once it has been truncated to fit into an int.
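If you actually want to look at the address as an integer, a minimal sketch (my own example, not from the question) is to use std::uintptr_t, an integer type guaranteed to be wide enough for an object pointer on platforms that provide it:

#include <cstdint>
#include <iostream>

int main() {
    const char *p = "this is a string";

    // uintptr_t is wide enough for the pointer, so this
    // conversion is well-defined and round-trippable.
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(p);

    std::cout << addr << "\n";                         // the address as an integer
    std::cout << static_cast<const void*>(p) << "\n";  // same address, printed by iostream
    return 0;
}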
reinterpret_cast causes the data that you are casting to be considered as a different type, without any conversion being performed. According to the standard, about the only thing that is defined behavior after a reinterpret_cast is to reinterpret_cast it back to the original type.
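A minimal sketch of that round-trip guarantee (my own example; Widget is a made-up type for illustration):

#include <iostream>

struct Widget { int value; };

int main() {
    Widget w{42};

    // Stash the pointer behind an unrelated pointer type...
    void *opaque = reinterpret_cast<void*>(&w);

    // ...the only portable thing to do with it is cast it back.
    Widget *back = reinterpret_cast<Widget*>(opaque);

    std::cout << back->value << "\n"; // prints 42
    return 0;
}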
This sounds a little useless, but one reason you might want to use it is, for example, when using certain C libraries (e.g. pthreads) in C++. A lot of C libraries that involve callbacks pass a parameter of type void* to the callback. In C++, the proper way to deal with this is to take a pointer to whatever you want to use as the parameter, reinterpret_cast it to void* when passing it into the C library, and then reinterpret_cast it back to whatever it actually is inside the callback, as sketched below.
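Here is a minimal sketch of that pattern with pthreads (my own illustration; TaskData is a made-up type, and you'd compile with -lpthread):

#include <pthread.h>
#include <iostream>

struct TaskData { int id; };

// pthreads hands the argument back as void*; cast it back to its real type.
void *worker(void *arg) {
    TaskData *data = reinterpret_cast<TaskData*>(arg);
    std::cout << "task " << data->id << "\n";
    return nullptr;
}

int main() {
    TaskData data{7};
    pthread_t thread;

    // Pass our object through the C API as an opaque void*.
    pthread_create(&thread, nullptr, worker, reinterpret_cast<void*>(&data));
    pthread_join(thread, nullptr);
    return 0;
}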
While it is not guaranteed to be defined behavior by the standard, on most platforms/compilers what a reinterpret_cast does is simply consider the data to be a different type, using the same bit pattern. For example, if you have a 32-bit float f whose bit pattern happens to be 01101010 00111100 01101010 01000001, and you write int i = *reinterpret_cast<int*>(&f);, then i will be an integer whose bit pattern is 01101010 00111100 01101010 01000001, even though that bit pattern represents a wildly different numeric value as an integer than it did as a float. And obviously, this can get you into trouble quickly if the types involved are not the same size.