views: 115
answers: 5

I'd like to add compile time asserts into the following C++ code (compiled with Visual C++ 9):

//assumes typedef unsigned char BYTE;
int value = ...;
// Does it fit into BYTE?
if( 0 <= value && value <= UCHAR_MAX ) {
    BYTE asByte = static_cast<BYTE>( value );
    //proceed with byte
} else {
    //proceed with greater values
}

The problem is that UCHAR_MAX (a macro) and BYTE (a typedef) are defined independently, so when this code is ported they can get out of sync and the code will break. So I wanted to do something like this:

compileTimeAssert( sizeof( BYTE ) == sizeof( UCHAR_MAX ) );

but VC++9 produces a "negative subscript" error while compiling that - sizeof( UCHAR_MAX ) happens to be 4, not 1.

How can I achieve the compile-time check I want?

+2  A: 

You can test in the compile-time assert that ( (1 << (sizeof(BYTE)*CHAR_BIT)) - 1 ) == UCHAR_MAX.

(I assume that you're not asking how to do a static assert - there are several ways, see here)

adamk
Cool. Should be `CHAR_BIT` instead of 8.
sharptooth
@sharptooth - corrected.
adamk
For `BYTE` this is probably fine, but for larger types won't this overflow?
bk1e
+1  A: 

Use BOOST_STATIC_ASSERT.

Pontus Gagge
+5  A: 

Compare value with std::numeric_limits< BYTE >::max() instead of UCHAR_MAX.

usta
You can't use `std::numeric_limits< BYTE >::max()` in a constant expression, which any static_assert implementation requires. PS I suppose that wasn't really addressing the "static assertion" part of the question.
Alex B
True. But the static assert is no longer required if you test value against std::numeric_limits< BYTE >::max(). sharptooth wanted the static assert only to verify that the max value being tested against is the correct one for the BYTE type. With numeric_limits you don't have to worry about that, because it is guaranteed to give you the correct max value.
usta
Ah, yes, you are right, the compile assert is redundant in this case.
Alex B
+1  A: 

but VC++9 produces "negative subscript" error while compiling that - sizeof( UCHAR_MAX ) happens to be 4, not 1.

This post does not offer a solution; it tries to get to the root cause.

Consider the definition

#define MAX 255

My understanding is that sizeof(MAX), i.e. sizeof(255), is always going to be the size of that integer literal on the given platform, per the rules of Standard 2.13.1/2. Just because the name is UCHAR_MAX and it holds the max value of an unsigned char does not mean that the size of such a name will be the size of a char:

The type of an integer literal depends on its form, value, and suffix. If it is decimal and has no suffix, it has the first of these types in which its value can be represented: int, long int; if the value cannot be represented as a long int, the behavior is undefined. If it is octal or hexadecimal and has no suffix, it has the first of these types in which its value can be represented: int, unsigned int, long int, unsigned long int. If it is suffixed by u or U, its type is the first of these types in which its value can be represented: unsigned int, unsigned long int. If it is suffixed by l or L, its type is the first of these types in which its value can be represented: long int, unsigned long int. If it is suffixed by ul, lu, uL, Lu, Ul, lU, UL, or LU, its type is unsigned long int.

So the expectation that it will be 1 needs to be rechecked, which seems to be the root cause here. sizeof(MAX) will be 1 only on those architectures where int is as wide as char. I am not sure how many such systems are really out there.

Chubsdad
+1  A: 

UCHAR_MAX is the maximum value of unsigned char, and sizeof(unsigned char) is 1 by definition. So if you want to check whether the range of BYTE is 0 .. UCHAR_MAX (unsigned) or -(UCHAR_MAX/2+1) .. UCHAR_MAX/2 (signed), simply check that sizeof(BYTE) == 1.

If you want to check if some int value fits into BYTE then do:

if (!(value & ~(BYTE)-1)) ...
adf88