views: 176
answers: 3

I managed to produce a big-endian (BE) octet representation of an unsigned long int by reading about IPv4 methods, and I have read that signed integers use the MSB as the sign indicator, which makes 00 00 00 00 equal 0, while 7F FF FF FF is 2147483647.

But I can't figure out how to do the same for signed long integers.

#include <stdio.h>

int main (void)
{
    unsigned long int intu32;
    unsigned char octets[4];

    intu32 = 255;

    /* Serialize: most significant byte first (big-endian). */
    octets[3] = intu32 & 255;
    octets[2] = (intu32 >> 8) & 255;
    octets[1] = (intu32 >> 16) & 255;
    octets[0] = (intu32 >> 24) & 255;
    printf("(%d)(%d)(%d)(%d)\n", octets[0], octets[1], octets[2], octets[3]);

    /* Deserialize: cast before shifting so octets[0] is never
       shifted into the sign bit of a plain int. */
    intu32 = ((unsigned long) octets[0] << 24) | ((unsigned long) octets[1] << 16)
           | ((unsigned long) octets[2] << 8) | (unsigned long) octets[3];
    printf("intu32:%lu\n", intu32);

    return 0;
}

Thanks in advance, Doori Bar

+2  A: 

There is no difference. You can always serialize/deserialize signed integers as if they were unsigned; the difference is only in the interpretation of the bits, not in the bits themselves.

Of course, this only holds true if you know that the unsigned and signed integers are of the same size, so that no bits get lost.

Also, you need to be careful (as you are) that no intermediate stage does any unplanned sign extension or the like; using unsigned char for the individual bytes is a good idea.
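
For a concrete illustration of this, here is a minimal sketch (an assumed example, not part of the original answer) that round-trips a negative value through the same big-endian byte routine as the question, assuming the fixed-width types from <stdint.h> and a two's complement machine:

#include <stdio.h>
#include <stdint.h>

int main (void)
{
    int32_t ints32 = -123456;               /* signed value to serialize */
    uint32_t bits = (uint32_t) ints32;      /* same bits, unsigned view */
    unsigned char octets[4];

    /* Serialize exactly as for an unsigned value (big-endian). */
    octets[3] = bits & 255;
    octets[2] = (bits >> 8) & 255;
    octets[1] = (bits >> 16) & 255;
    octets[0] = (bits >> 24) & 255;
    printf("(%02X)(%02X)(%02X)(%02X)\n", octets[0], octets[1], octets[2], octets[3]);

    /* Deserialize into the unsigned type, then reinterpret as signed;
       the final cast is implementation-defined for out-of-range values
       but does what you expect on two's complement machines. */
    bits = ((uint32_t) octets[0] << 24) | ((uint32_t) octets[1] << 16)
         | ((uint32_t) octets[2] << 8) | (uint32_t) octets[3];
    ints32 = (int32_t) bits;
    printf("ints32:%ld\n", (long) ints32);  /* prints -123456 */

    return 0;
}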

unwind
Very much appreciated. I guess I've been reading too much about the difference between signed and unsigned, and the idea of using one bit less to represent the sign, which made me believe there was something further to tweak to keep the signed representation. So I'll use the exact same method for both unsigned and signed.
Doori Bar
A: 

You are probably confused because it is common practice (and done on x86 processors) to encode negative values using two's complement encoding. This means that the hex notation of the 4-byte value -1 is 0xFFFFFFFF. The reason this encoding is used is that, taking wrap-around overflow into account, adding 2 (0x00000002) to -1 yields the correct result (0x00000001).
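
You can see both effects directly; a small sketch (an illustrative example, assuming 32-bit int on a two's complement machine):

#include <stdio.h>

int main (void)
{
    int x = -1;

    /* The bit pattern of -1 is all ones under two's complement. */
    printf("-1 as hex: 0x%08X\n", (unsigned) x);          /* 0xFFFFFFFF */

    /* Wrap-around addition: 0xFFFFFFFF + 2 overflows to 0x00000001. */
    printf("-1 + 2 = %d (0x%08X)\n", x + 2, (unsigned) (x + 2));

    return 0;
}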

Paul de Vrieze
Thanks for the explanation. I guess we can agree that I got confused by the whole subject :)
Doori Bar
A: 

Do you want something like this? It would be helpful (as Vicki asked) if you could provide what you have and what you want to get.

#include <stdio.h>

int main (void)
{
    /* The union overlays the integer with its raw bytes, in the
       machine's native byte order (note: long int may be 8 bytes). */
    union {
        long int intu32;
        unsigned char octets[4];
    } u;
    u.intu32 = 255;

    printf("(%d)(%d)(%d)(%d)\n", (int) u.octets[3], (int) u.octets[2], (int) u.octets[1], (int) u.octets[0]);

    printf("intu32:%ld\n", u.intu32);

    return 0;
}
philcolbourn
As unwind explained, it seems the sample I gave is perfectly valid for both signed and unsigned 32-bit integers. Regarding your sample - isn't it affected by the endianness of the machine? (Whereas my sample won't be affected if the data is stored on a LE machine and extracted on a BE machine?)
Doori Bar
Perhaps. For integers you are right (though I have no personal experience). But it would also depend on how the IP address was packed into the intu32 value in the first place. If the IP address from the IP frame is simply accessed through an integer in a structure, then I think the union is the portable way to process it. BTW, you probably noticed that I rotated the octet indices.
philcolbourn
This is a good article about the issue: http://www.ibm.com/developerworks/aix/library/au-endianc/index.html?ca=drs-
philcolbourn
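
To make the endianness point concrete, here is a minimal sketch (an illustrative example, not from the thread) that uses the same union trick to report the machine's byte order:

#include <stdio.h>

int main (void)
{
    /* The first byte of a known multi-byte value reveals byte order. */
    union {
        unsigned long value;
        unsigned char bytes[sizeof(unsigned long)];
    } probe;

    probe.value = 1;

    if (probe.bytes[0] == 1)
        printf("little-endian\n");   /* least significant byte first */
    else
        printf("big-endian\n");      /* most significant byte first */

    return 0;
}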