Hi, I have some simple code:

char t = (char)(3000);

Then the value of t is -72. The hex value of 3000 is 0xBB8, and I can't understand why the value of t is -72.
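
Here is a minimal complete program that shows it:

#include <stdio.h>

int main(void) {
    char t = (char)(3000);  /* 3000 is 0x0BB8 */
    printf("%d\n", t);      /* prints -72 on my machine */
    return 0;
}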

Thanks for your answers.

A: 

I've been taught in school that there's no need to convert from int to char in C, because they're kind of the same :S

Omu
They are. The `(char)` in the question is unnecessary. However, `int` and `char` have different sizes.
Artelius
+4  A: 

A char is typically 8 bits (which can only represent a range of 256 values, e.g. 0-255 unsigned). Trying to cast 3000 to a char is... impossible, at least for what you are intending.

brianreavis
Trying to fit 3000 in a char is impossible. Trying to cast 3000 to a char is quite possible. You're just likely to get a truncated result.
Artelius
@Artelius: There's no difference in the result with a cast or without one. The actual conversion is the same in both cases, and the result is implementation-defined.
AndreyT
@AndreyT: I meant that 3000 simply does not *fit* in an 8-bit char. But there's nothing stopping you from *casting* 3000 to a char, with the result being implementation-defined. Saying that a "cast" is impossible is wrong.
Artelius
*Casting* 3000 to a `char` is, *for what the user is intending to do*, impossible. Sorry for being ambiguous.
brianreavis
+3  A: 

This is happening because 3000 is too big to fit in a char, so it overflows. char generally runs from -128 to 127 signed, or 0 to 255 unsigned, but this can change depending upon the implementation.
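
You can check the actual range on your own implementation with <limits.h>, for example:

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* CHAR_MIN is 0 on implementations where plain char is unsigned */
    printf("char range: %d to %d, CHAR_BIT = %d\n", CHAR_MIN, CHAR_MAX, CHAR_BIT);
    return 0;
}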

Justin Johnson
Depending on the implementation, `char` can have absolutely any range. There's no guarantee like "-128 to 127 signed, or 0 to 255 unsigned". Those are the *minimum* ranges of `char`.
AndreyT
This is true, and I'll edit my answer to reflect it; however, I didn't feel it was appropriate to mention this, since the OP is obviously a novice encountering this concept for the first time, and further detail would have muddled the answer.
Justin Johnson
A: 

oh, I get it, it's overflow. It's like char only goes from -256 to 256 or something like that, I'm not sure; like if you have a var whose type's max limit is 256 and you add 1 to it, then it becomes -256, and so on.

Omu
char is -128 to 127, not -256 to 256.
246tNt
No one knows the range of `char` without analyzing a particular implementation. So, please stop saying that the range of `char` is -128 to 127. It isn't in the general case. `char` can actually be unsigned.
AndreyT
@AndreyT: True, but `char`s are very often -128 to 127, and very rarely (in practice never) -256 to 256.
Artelius
@Artelius: There are embedded C platforms out there with 4-byte char with range same as `int`. In practice, that is.
AndreyT
@AndreyT: I know. Given that the OP got the result -72 I don't think he's using one of them, though.
Artelius
`4-byte char`? By definition a `char` is 1 byte! I think you meant `32-bit char` lol ... and the range of `char` is either `SCHAR_MIN` to `SCHAR_MAX` or 0 to `UCHAR_MAX`
pmg
char is not 1 byte by definition. It's the smallest addressable unit. The standard says it must be at _LEAST_ 8 bits. That's basically true for every other type.
Pod
@pmg: I think you're thinking of C++
Artelius
A: 

A char is (typically) just 8 bits, so you can't store values as large as 3000 (which would require at least 12 bits). So if you try to store 3000 in a byte, it will just wrap.

Since 3000 is 0xBB8, it requires two bytes: one 0x0B and one 0xB8. If you try to store it in a single byte, you will just get one of them (0xB8). And since a char is (typically) signed, that is -72.
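
You can see the two bytes with shifts and masks, for example:

#include <stdio.h>

int main(void) {
    int n = 3000;                                                /* 0x0BB8 */
    printf("high byte: 0x%02X\n", (unsigned)((n >> 8) & 0xFF));  /* 0x0B */
    printf("low byte:  0x%02X\n", (unsigned)(n & 0xFF));         /* 0xB8 */
    return 0;
}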

Rasmus Kaj
Actually you'd need 12 bits.
246tNt
Yes, sorry. Or even 13, since both int and char are usually signed, so you need an initial '0' bit to say it's positive.
Rasmus Kaj
+6  A: 

0xB8 as a signed char is -72 in decimal. Casting the int (0x0BB8) to a char is stripping off the high bits and leaving the least significant 8 bits (0xB8).
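
For example (assuming an 8-bit char that is signed by default):

#include <stdio.h>

int main(void) {
    int n = 3000;              /* 0x0BB8 */
    char t = (char)n;          /* the high bits are stripped, leaving 0xB8 */
    printf("%d\n", n & 0xFF);  /* 184: the low 8 bits read as unsigned */
    printf("%d\n", t);         /* -72 where char is a signed 8-bit type */
    return 0;
}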

J. John
+2  A: 

char is an integral type with a certain range of representable values. int is also an integral type with a certain range of representable values. Normally, the range of int is [much] wider than that of char. When you try to squeeze an int value that doesn't fit into the range of char into a char, the value will not "fit", of course. The actual result is implementation-defined.

In your case, 3000 is an int value that doesn't fit into the range of char on your implementation. So you won't get 3000 as the result. If you really want to know why it specifically came out as -72, consult the documentation that came with your implementation.
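
Here's a sketch of that distinction (assuming 8-bit chars): converting to an unsigned type is fully defined by the standard (reduction modulo 256), while an out-of-range conversion to a signed type is implementation-defined:

#include <stdio.h>

int main(void) {
    unsigned char u = (unsigned char)3000;  /* well-defined: 3000 % 256 == 184 */
    signed char s = (signed char)3000;      /* implementation-defined value */
    printf("%d %d\n", u, s);                /* commonly prints "184 -72" */
    return 0;
}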

AndreyT
+12  A: 

The hex value of 3000 is 0xBB8.

And so the hex value of the char (which, by the way, appears to be signed on your compiler) is 0xB8.

If it were unsigned, 0xB8 would be 184. But since it's signed, its actual value is 256 less, i.e. -72.

If you want to know why this is, read about two's complement notation.
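
To make the arithmetic concrete, here's a sketch assuming an 8-bit two's complement char:

#include <stdio.h>

int main(void) {
    unsigned char low = 0xB8;   /* the low byte of 0x0BB8 */
    printf("%d\n", low);        /* 184 when read as unsigned */
    printf("%d\n", low - 256);  /* -72: the same bit pattern read as signed */
    return 0;
}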

Artelius
A: 

char is used to hold a single character, and you're trying to store a 4-digit int in one. Perhaps you meant to use an array of chars, i.e. a string (char t[5] in this case, leaving room for the terminating NUL).

To convert an int to a string (untested):

#include <stdio.h>

int main(void) {
    int num = 3000;
    char numString[5];                                 /* "3000" plus the terminating NUL */
    snprintf(numString, sizeof numString, "%d", num);  /* itoa is non-standard; snprintf is portable */
    return 0;
}
igul222
Don't post untested code, particularly if you're not up to scratch on the subject. The explanation is misleading.
Artelius
+1  A: 

As specified, the 16-bit hex value of 3000 is 0x0BB8. Although implementation-specific, from your posted results this is likely stored in memory as the byte pair B8 0B (some architectures would store it as 0B B8; this is known as endianness).

char, on the other hand, is probably not a 16-bit type. Again, this is implementation-specific, but from your posted results it appears to be 8 bits, which is not uncommon.

So while your program has allocated 8 bits of memory for your value, the value itself needs twice that. When the conversion happens, only the low octet, B8, is kept; the high octet, 0B, is simply discarded. Nothing is overwritten in memory; the extra bits are just thrown away, and the stored value is no longer the one you started with.

Assuming two's complement (technically implementation-specific, but a reasonable assumption), the hex value B8 translates to either -72 or 184 in decimal, depending on whether you're dealing with a signed or unsigned type. Since you didn't specify either, your compiler goes with its default. Yet again, this is implementation-specific, and it appears your compiler uses a signed char.

Therefore, you get -72. But don't expect the same results on any other system.
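
If you're curious, here's a quick sketch to inspect both the byte order and the default signedness on your machine (note that the conversion itself operates on the value, not on bytes in memory):

#include <limits.h>
#include <stdio.h>

int main(void) {
    int n = 3000;                            /* 0x00000BB8 with a 32-bit int */
    unsigned char *p = (unsigned char *)&n;
    printf("first byte in memory: 0x%02X\n", (unsigned)p[0]);  /* 0xB8 little-endian, 0x00 big-endian */
    printf("plain char is %s\n", CHAR_MIN < 0 ? "signed" : "unsigned");
    return 0;
}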

goldPseudo
A: 

I don't know about the Mac. My result is also -72. As far as I know, the Mac uses big-endian byte order, so does that affect the result? I don't have a Mac to test on, so I'd like to hear from Mac users.