views: 326
answers: 4

How does a C program determine, at RUN time (not compile time), whether it's running on a Little-Endian or Big-Endian CPU?

The reason it must be a run-time check, not a compile-time one, is that I'm building the program in Mac OS X's Universal Binary format on my Mac with an Intel CPU, and the program is expected to run on both Intel and PowerPC CPUs. That is, through the Universal Binary format on the Mac, I want to build the program on an Intel CPU and run it on a PPC CPU.

The logic in my program that needs the CPU check is the host-to-network-byte-order conversion function for 64-bit integers. Right now it blindly swaps the byte order, which works fine on Intel CPUs but breaks on PPC. Here's the C function:

unsigned long long
hton64b(const unsigned long long h64bits) {
    // Low-order 32 bits in front, followed by high-order 32 bits.
    return ((unsigned long long) htonl((unsigned long)(h64bits & 0xFFFFFFFF)) << 32)
         |  htonl((unsigned long)((h64bits >> 32) & 0xFFFFFFFF));
} // hton64b()

Is there a better, cross-platform way of doing this?

Thanks

A: 

Do you realize that universal binaries on the Mac are compiled multiple times, once for each architecture? I imagine that when you talk about compile time, you're referring to using your configure/make system to inform the source. Just use GCC's predefined macros (like __LITTLE_ENDIAN__).
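
For instance, a minimal sketch assuming Apple's GCC, which predefines __BIG_ENDIAN__ when targeting PowerPC and __LITTLE_ENDIAN__ when targeting Intel; each slice of the universal binary then gets the right version at compile time:

    #include <arpa/inet.h>  /* htonl() */

    unsigned long long
    hton64b(const unsigned long long h64bits) {
    #if defined(__BIG_ENDIAN__)
        /* Host order is already network order; nothing to do. */
        return h64bits;
    #else
        /* Little-endian host: byte-swap each 32-bit half with htonl()
           and exchange the halves to reverse all eight bytes. */
        return ((unsigned long long) htonl((unsigned long)(h64bits & 0xFFFFFFFF)) << 32)
             |  htonl((unsigned long)((h64bits >> 32) & 0xFFFFFFFF));
    #endif
    }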

Douglas Mayle
+1  A: 

Don't bother checking; just use hton* wherever you need a value in network byte order. With a good design, that conversion should be confined to the single module that interfaces between your program and whatever it is that needs network-order integers.

On big-endian systems, which are already in network order, hton* is probably just a no-op macro, so it's free. On little-endian systems you're going to have to do the swap anyway, so checking whether you need to do it only slows you down.

If this is insufficient, then you'll need to provide a better explanation of what you're trying to accomplish and why you need to know the endianness of the system at runtime.
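
If a genuine run-time check ever does turn out to be necessary, the classic trick is to inspect the byte layout of a known integer; a minimal sketch (the name is_little_endian is mine, not from the original answer):

    #include <stdint.h>

    /* Returns 1 on a little-endian host, 0 on a big-endian host, by
       checking which end of a known 32-bit value holds its low byte. */
    static int is_little_endian(void) {
        const uint32_t probe = 1;
        return *(const uint8_t *)&probe == 1;
    }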

Rudedog
A: 
  • There will be preprocessor macros available for testing whether the target is big- or little-endian, e.g.:

    #if defined(__LITTLE_ENDIAN__)
        /* do it the little-endian way */
    #else
        /* do it the big-endian way */
    #endif

This is a compile-time check, but since the source for fat binaries gets compiled separately for each architecture, that is not a problem.

  • I'm not sure whether Mac OS X has the betoh64() function in sys/endian.h, but if it does, use it; it will do the right thing.
  • The last approach is to simply unpack the individual bytes in a way that is not sensitive to the host's endianness; you only need to know the byte order of the source data.

    #include <stdint.h>

    /* Reassemble a 64-bit big-endian (network-order) value from
       8 bytes, independent of the host's endianness. */
    uint64_t unpack64(const uint8_t *src)
    {
       uint64_t val;

       val  = (uint64_t)src[0] << 56;
       val |= (uint64_t)src[1] << 48;
       val |= (uint64_t)src[2] << 40;
       val |= (uint64_t)src[3] << 32;
       val |= (uint64_t)src[4] << 24;
       val |= (uint64_t)src[5] << 16;
       val |= (uint64_t)src[6] <<  8;
       val |= (uint64_t)src[7];

       return val;
    }
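
A matching writer for the sending side, sketched here as a mirror of unpack64 (the name pack64 is not from the original answer), stores the high-order byte first so the buffer is in network order regardless of the host CPU:

    #include <stdint.h>

    /* Store a 64-bit value into 8 bytes in big-endian (network)
       order, independent of the host's endianness. */
    void pack64(uint8_t *dst, uint64_t val)
    {
       dst[0] = (uint8_t)(val >> 56);
       dst[1] = (uint8_t)(val >> 48);
       dst[2] = (uint8_t)(val >> 40);
       dst[3] = (uint8_t)(val >> 32);
       dst[4] = (uint8_t)(val >> 24);
       dst[5] = (uint8_t)(val >> 16);
       dst[6] = (uint8_t)(val >>  8);
       dst[7] = (uint8_t)(val);
    }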
    
nos
A: 

You don't need to check the endianness at runtime. When you compile an application as a universal binary, it is compiled multiple times with the appropriate defines and macros, EVEN if you are building on an Intel machine. At runtime, the Mach-O loader will choose the best architecture to run from your universal binary (i.e., ppc on PowerPC or i386 on Intel).
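
To illustrate, a small sketch (assuming the architecture macros Apple's compilers predefine, such as __ppc__ and __i386__): each slice of the fat binary is compiled seeing only its own architecture's defines, so which line prints depends on the slice the loader picks:

    #include <stdio.h>

    int main(void) {
    #if defined(__ppc__) || defined(__ppc64__)
        puts("running the big-endian PowerPC slice");
    #elif defined(__i386__) || defined(__x86_64__)
        puts("running the little-endian Intel slice");
    #else
        puts("unknown architecture");
    #endif
        return 0;
    }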

A universal binary is not one binary for multiple architectures; it is one fat binary containing a separate binary for each architecture.

Please refer to http://developer.apple.com/legacy/mac/library/documentation/MacOSX/Conceptual/universal_binary/universal_binary_intro/universal_binary_intro.html for more details.

Laurent Etiemble