Is `-5` a single literal, or is `5` the literal and `-5` an expression with unary minus taking a literal as an argument? The question arose when I was wondering how to hardcode the smallest signed integer values. TIA.

+9  A: 

It's a unary minus followed by `5` as an integer literal. Yes, that makes it somewhat difficult to represent the smallest possible integer in two's complement.

Jerry Coffin
@Jerry Coffin: If types `long` and `unsigned long` are 32 bits, would a standards-conforming compiler regard `-2147483648` as an `unsigned long` equal to 2147483648, an error, or undefined behavior?
supercat
@supercat: in C++ it's undefined, but then in C++ with the sizes you specify, `2147483648` is undefined (2.13.1/2) - you *must* provide a suffix `2147483648u` or get help from your implementation. You'd hope that the compiler will do you a favour, and treat `2147483648` as an unsigned long, or maybe a `long long` if supported, or an error if all else fails: but it doesn't have to. In C99 (again with those sizes), `-2147483648` is a negative `long long`. I think in C89 it's an `unsigned long`, causing no end of entertainment when migrating from one to the other.
Steve Jessop
In C99 with 32-bit `int`, `-2147483648` has type `unsigned int` and its value is 2147483648. There is no reason it would become `long long` because 2147483648 fits in `unsigned int`, and then the unary negation operator is applied to it.
R..
@R.: In C99, without the `u` suffix, a decimal integer constant never has an unsigned type (at least that's what the table at 6.4.4.1/5 says; is there somewhere else that says otherwise?)
James McNellis
@R.: @James McNellis: At least in C89, decimal constants do not become unsigned *unless* there is no signed type large enough to hold them; hexadecimal constants are signed *unless* they fall in the range between the maximum signed value of a given size and the maximum unsigned value of that size. @Steve Jessop: In C99, what would be the effect of -9223372036854775808 in the source, if `long long` is 64 bits?
supercat
@Steve Jessop: Hex constants' signed/unsigned behavior is more "interesting" than decimal. With 16-bit integers, (-1 < 0x7FFF) and (-1 < 0x10000) but (-1 > 0x8000). There's an annoying gotcha which is present in many languages, though, which is that trying to 'and' a long variable with the complement of an integer type will fail if the msb of the integer is set, regardless of whether the integer is signed or unsigned. I wonder why modern languages don't fix that (e.g. by using "as-if long" rules).
supercat
@supercat: (1) Um. Good question. 6.4.4.1/6 says, "the integer constant has no type". I don't know what that means, whether it's UB or whether it means the value can be safely converted to an unsigned type using the usual modulus rules. (2) I don't know, in C and C++ I think it's a messy compromise between sane behaviour vs. compatibility, and both lose. Just don't type big numbers, it's not worth the hassle. In "modern" languages, integer literals should probably be as big as you like using some kind of BigInt type, perhaps with a warning if you then coerce them to a fixed-size type.
Steve Jessop
... or do any logical ops with fixed-size types of different sizes. C is a bit too weakly typed, I think. Being able to pass an `int` when a `long` is expected (them being different sizes) isn't too bad, but the usual arithmetic conversions are just *hard*.
Steve Jessop
+3  A: 

As Jerry Coffin said, the minus sign is not part of the literal. As for how to solve your ultimate question,

I was wondering how to hardcode smallest signed integer values

That's what `INT_MIN` (and its relatives in `limits.h` or `stdint.h` or wherever) is for.

If you look at how `INT_MIN` is defined, it'll probably look something like `(-2147483647 - 1)` to work around the problem raised by the question.

Michael Burr
Thanks. I already did. Actually, I'm not running into any problem; I was just wondering :))
Armen Tsirunyan
@Armen: Understood. I guess I should say "the problem raised by the question" then.
Michael Burr