This is what I get for pampering myself with high-level programming languages.


I have a function which writes a 32-bit value to a buffer, and a uint64_t on the stack. Is the following code a sane way to store it?

uint64_t size = 0;
// ...
getBytes((uint32_t*)&size+0x1);

I'm assuming that this would be the canonical, safe style:

uint64_t size = 0;
// ...
uint32_t smallSize;
getBytes(&smallSize);
size = smallSize;
+3  A: 

No. It works correctly only on big-endian machines: there, the second 32-bit word of a uint64_t is its low-order half, so writing the value into it (with the high half already zeroed) yields the right result. On a little-endian machine the same write lands in the high-order half. And assuming a particular byte order - without even checking it first - is not sane.
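If the byte order really has to be known at run time, a minimal check can be written portably (a sketch; the helper name is_big_endian is illustrative):

#include <stdint.h>
#include <string.h>

/* Nonzero on a big-endian machine, where the most significant byte
   of a value is stored at the lowest address. */
static int is_big_endian(void) {
    const uint32_t probe = 1;
    unsigned char first_byte;
    memcpy(&first_byte, &probe, 1);  /* read the lowest-addressed byte */
    return first_byte == 0;
}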

Even if you are sure that your program runs only on big-endian machines right now, you never know whether it might have to run on a little-endian machine in the future. (I'm writing this on a computer made by a company which used big-endian processors for decades, then switched to little-endian processors a couple of years ago, and is now also quite successful with bi-endian processors in certain devices ;-))

oefe
Heh, I'm actually writing this code for just that company's machines, and luckily enough it's on a code path that's only taken on the big-endian variety. I'll take the advice, though.
Sidnicious
A: 

Why not make getBytes() return uint64_t, and use an argument (e.g. an int *) to return an error code, if any?

In my experience, if you really want to unify the two code paths, then use uint64_t in both.
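For instance, a thin wrapper along those lines (a sketch; the signature of getBytes is assumed from the question, and getSize is an illustrative name):

#include <stdint.h>

void getBytes(uint32_t *dst);  /* the asker's library function, assumed signature */

static uint64_t getSize(void) {
    uint32_t raw = 0;
    getBytes(&raw);  /* the library fills in the 32-bit value */
    return raw;      /* implicit widening to uint64_t */
}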

Also note that writing through "(uint32_t*)&size" breaks the C99 strict aliasing rules (with GCC, for example, you would have to build with -fno-strict-aliasing).
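For completeness, the cast can be avoided with memcpy, which sidesteps the aliasing problem but not the byte-order one (a sketch; it still assumes a big-endian layout, as the original code did):

#include <stdint.h>
#include <string.h>

uint64_t size = 0;
uint32_t raw = 0;
getBytes(&raw);  /* the library fills in the 32-bit value */
/* Copy into the second 32-bit word; on big-endian that is the
   low-order half, so size ends up holding the value. */
memcpy((unsigned char *)&size + 4, &raw, sizeof raw);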

Dummy00001
Thanks, but `getBytes` is a placeholder for a library function that's not under my control.
Sidnicious