views: 2641
answers: 8
Is there a one-line macro definition to determine the endianness of the machine? I am using the following code, but converting it to a macro would be too long.

unsigned char test_endian( void )
{
    int test_var = 1;
    unsigned char *test_endian = (unsigned char *)&test_var;

    /* returns nonzero (big endian) if the low-order byte is not stored first */
    return (test_endian[0] == 0);
}
+2  A: 

Use an inline function rather than a macro. Besides, you need to store something in memory, which is a not-so-nice side effect of a macro.

You could convert it to a short macro using a static or global variable, like this:

static int s_endianess = 0;
#define ENDIANESS() ((s_endianess = 1), (*(unsigned char*) &s_endianess) == 0)
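
The inline-function version suggested above might look something like this (just a sketch; the name is_little_endian is mine, not from the original code):

static inline int is_little_endian(void)
{
    int probe = 1;                          /* needs an object in memory to inspect */
    return *(unsigned char *)&probe == 1;   /* low-order byte comes first on little-endian machines */
}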
+2  A: 

There is no standard, but on many systems including <endian.h> will give you some defines to look for.
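
On glibc-based systems, for example, the check could look roughly like this (an illustrative sketch; the exact macro names vary between platforms):

#include <endian.h>   /* non-standard header; provided by glibc and some BSDs */
#include <stdio.h>

int main(void)
{
#if defined(__BYTE_ORDER) && __BYTE_ORDER == __LITTLE_ENDIAN
    puts("little endian");
#elif defined(__BYTE_ORDER) && __BYTE_ORDER == __BIG_ENDIAN
    puts("big endian");
#else
    puts("unknown byte order");
#endif
    return 0;
}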

Ignacio Vazquez-Abrams
+6  A: 
Norman Ramsey
I like this because it acknowledges the existence of endianness other than little and big.
Alok
Speaking of which, it might be worth calling the macro INT_ENDIANNESS, or even UINT32_T_ENDIANNESS, since it only tests the storage representation of one type. There's an ARM ABI where integral types are little-endian, but doubles are middle-endian (each word is little-endian, but the word with the sign bit in it comes before the other word). That caused some excitement among the compiler team for a day or so, I can tell you.
Steve Jessop
+5  A: 

If you want to rely only on the preprocessor, you have to figure out the list of predefined symbols. Preprocessor arithmetic has no concept of addressing.

GCC defines __LITTLE_ENDIAN__ or __BIG_ENDIAN__:

$ gcc -E -dM - < /dev/null |grep ENDIAN
#define __LITTLE_ENDIAN__ 1

Then you can add more preprocessor conditionals based on platform detection, like #ifdef _WIN32, and so on.
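
A combined check might be sketched like this (illustrative only: the macro name MY_LITTLE_ENDIAN is made up here, and which predefined symbols exist depends on the compiler and target):

#if defined(__LITTLE_ENDIAN__)
#  define MY_LITTLE_ENDIAN 1
#elif defined(__BIG_ENDIAN__)
#  define MY_LITTLE_ENDIAN 0
#elif defined(_WIN32)
#  define MY_LITTLE_ENDIAN 1   /* all Windows targets are little-endian */
#else
#  error "could not detect endianness at preprocessing time"
#endif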

Gregory Pakosz
GCC 4.1.2 on Linux doesn't appear to define those macros, although GCC 4.0.1 and 4.2.1 define them on Macintosh. So it's not a reliable method for cross-platform development, even when you're allowed to dictate which compiler to use.
Rob Kennedy
A: 

Whilst there is no portable #define or similar to rely upon, platforms do provide standard functions for converting to and from your 'host' endianness.

Generally, you do storage - to disk or network - using 'network endian', which is BIG endian, and local computation using host endian (which on x86 is LITTLE endian). You use htons(), ntohs() and friends to convert between the two.
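
For instance, a minimal sketch of converting a 32-bit value to and from network order:

#include <arpa/inet.h>   /* htonl()/ntohl(); Windows has them in <winsock2.h> */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t host = 0x12345678u;
    uint32_t wire = htonl(host);                  /* host order -> network (big-endian) order */
    unsigned char first = *(unsigned char *)&wire;

    printf("first byte on the wire: %#x\n", first);        /* always 0x12 */
    printf("round trip: %#x\n", (unsigned)ntohl(wire));    /* back to 0x12345678 */
    return 0;
}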

Will
+1  A: 

Try this:

#include <stdio.h>

int x = 1;

/* Looks at the first byte of the global x to decide the byte order. */
#define TEST (*(char *)&(x) == 1) ? printf("little endian") : printf("Big endian")

int main(void)
{
    TEST;
    return 0;
}
Prasoon Saurav
+2  A: 

If you desperately want to use the preprocessor, you could abuse string literals:

#include <stdint.h>

#define IS_BIG_ENDIAN (*(uint16_t *)"\0\xff" < 0x100)

In general though, you should try to write code that does not depend on the endianness of the host platform.
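
For example, a sketch of reading a 32-bit big-endian value from a byte buffer that works regardless of the host's byte order (the helper name read_be32 is mine):

#include <stdint.h>

static uint32_t read_be32(const unsigned char *p)
{
    /* assemble the value with shifts instead of reinterpreting memory */
    return ((uint32_t)p[0] << 24) |
           ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] <<  8) |
            (uint32_t)p[3];
}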

caf
couldn't this be optimized using the terminator? :)
jbcreix
"you should try to write code that does not depend on the endianness of the host platform". Unfortunately my plea, "I know we're writing a POSIX compatibility layer, but I don't want to implement ntoh, because it depends on the endianness of the host platform" always fell on deaf ears ;-). Graphics format handling and conversion code is the other main candidate I've seen - you don't want to base everything off calling ntohl all the time.
Steve Jessop
You can implement `ntohl` in a way that does not depend on the endianness of the host platform.
caf
Not efficiently. At least, not with the compilers we had at the time (GCCE and a bunch of others). They tended to do something in the case where the platform was big-endian.
Steve Jessop
+6  A: 

Code supporting arbitrary byte orders, ready to be put into a file called order32.h:

#ifndef ORDER32_H
#define ORDER32_H

#include <limits.h>
#include <stdint.h>

#if CHAR_BIT != 8
#error "unsupported char size"
#endif

enum
{
    O32_LITTLE_ENDIAN = 0x03020100ul,
    O32_BIG_ENDIAN = 0x00010203ul,
    O32_PDP_ENDIAN = 0x01000302ul
};

static const union { unsigned char bytes[4]; uint32_t value; } o32_host_order =
    { { 0, 1, 2, 3 } };

#define O32_HOST_ORDER (o32_host_order.value)

#endif

You would check for little endian systems via

O32_HOST_ORDER == O32_LITTLE_ENDIAN
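
A quick runtime check using that header might be sketched as:

#include <stdio.h>
#include "order32.h"

int main(void)
{
    if (O32_HOST_ORDER == O32_LITTLE_ENDIAN)
        puts("little endian");
    else if (O32_HOST_ORDER == O32_BIG_ENDIAN)
        puts("big endian");
    else if (O32_HOST_ORDER == O32_PDP_ENDIAN)
        puts("PDP endian");
    else
        puts("unknown byte order");
    return 0;
}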
Christoph
Nice, and very concise.
bta
This doesn't let you *decide* endianness until runtime though. The following fails to compile because O32_HOST_ORDER is not an integral constant expression: /** isLittleEndian::result --> 0 or 1 */ struct isLittleEndian { enum isLittleEndianResult { result = (O32_HOST_ORDER == O32_LITTLE_ENDIAN) }; };