I'm a fairly new programmer, and I was wondering if someone could give me a practical explanation or example of the differences between signed and unsigned integers, and between 32-bit and 64-bit values, and when you would use each?

E.g., I read an article about how Twitter had developers switch to 64-bit last year, but I wasn't sure of the reasoning behind it or the specifics.

Thank you!

+2  A: 

With n bits, you can represent 2^n different values. So 32-bit unsigned numbers go from 0 to 4,294,967,295 (that's 2^32 - 1; the -1 is because 0 takes up one of the values). Signed numbers split that range of 4 billion roughly evenly between negative and positive: a signed 32-bit integer (in two's complement) runs from -2,147,483,648 to 2,147,483,647. 32-bit computers use 32-bit values for memory addresses, which means a program can natively address 4 GB of memory. 64-bit computers have a limit of 2^64 (about 1.8 × 10^19), which is much, much higher.
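You can see these ranges for yourself with a minimal C sketch, using the fixed-width types from <stdint.h> and the portable format macros from <inttypes.h>:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        /* Ranges of the common fixed-width integer types. */
        printf("int32_t : %" PRId32 " to %" PRId32 "\n", INT32_MIN, INT32_MAX);
        printf("uint32_t: 0 to %" PRIu32 "\n", UINT32_MAX);
        printf("int64_t : %" PRId64 " to %" PRId64 "\n", INT64_MIN, INT64_MAX);
        printf("uint64_t: 0 to %" PRIu64 "\n", UINT64_MAX);
        return 0;
    }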

You also run into that 4 billion limit if you're using 32-bit numbers to represent other things: user IDs, tweet IDs, seconds since a certain date, and so on. That is essentially what happened with Twitter: tweet IDs grew past the signed 32-bit maximum of 2,147,483,647, which broke clients that stored them in signed 32-bit integers, so Twitter moved to 64-bit IDs. 32-bit works just fine up to a certain scale; above that, even though there are ways to work around the limit, it makes more sense to go to 64-bit. The sketch below shows what the wraparound looks like.
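Here is a small sketch of the problem, using a hypothetical ID counter (not Twitter's actual code). Unsigned wraparound is well-defined in C, so the example uses uint32_t to show it:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        /* Hypothetical ID counter sitting at the unsigned 32-bit maximum. */
        uint32_t id = UINT32_MAX;   /* 4,294,967,295 */
        printf("last 32-bit id: %" PRIu32 "\n", id);
        id++;                       /* wraps around to 0 -- new IDs would collide with old ones */
        printf("next 32-bit id: %" PRIu32 "\n", id);

        /* The same increment in 64 bits just keeps counting. */
        uint64_t wide_id = (uint64_t)UINT32_MAX;
        wide_id++;                  /* 4,294,967,296 -- no wraparound for a very long time */
        printf("next 64-bit id: %" PRIu64 "\n", wide_id);
        return 0;
    }

(With signed types the situation is worse: signed overflow is undefined behavior in C, so an ID crossing 2,147,483,647 can break programs in unpredictable ways.)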

The disadvantage is that 64-bit numbers take twice as much memory to store.
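A quick check of the per-value cost (the array estimate in the comment is just illustrative):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        printf("sizeof(int32_t) = %zu bytes\n", sizeof(int32_t));  /* 4 */
        printf("sizeof(int64_t) = %zu bytes\n", sizeof(int64_t));  /* 8 */
        /* An array of 10 million IDs: ~40 MB as int32_t vs ~80 MB as int64_t. */
        return 0;
    }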

Karl Bielefeldt
+1  A: 

On the subject of mixed 32/64-bit arithmetic, you can learn a lot of interesting things here: A Collection of Examples of 64-bit Errors in Real Programs.

Andrey Karpov