Just noticed this on OSX and I found it curious as I expected long to be bigger than int. Is there any good reason for making them the same size?
int is supposed to be the natural word size of the architecture. In the old days, on 16 bit machines like the original IBM PC, ints were 16 bits and longs were 32 bits. On 32 bit machines like the 68000 series, ints were still "the natural word size", which was now 32 bits, and longs remained at 32 bits. Over time, longs grew to be 64 bits, and then we started using 64 bit architectures like the Intel Core 2, and so I expect int to grow to 64 bits sooner or later.
Interesting fact: On my laptop, with a Core 2 Duo and Mac OS X 10.5, int and long are both 32 bits. On my Linux box, also with a Core 2 Duo and Ubuntu, int is 32 bits and long is 64 bits.
Years ago, I was asked in a job interview where an int pointer would be after you added 3 to it. I answered "3 times sizeof(int) past where it is now". The interviewer pressed me, and I said it would depend on the architecture, since (at that time) Windows used 16 bit ints but since I was doing Unix programming I was more used to 32 bit ints. I didn't get the job - I suspect the interviewer didn't like the fact that I knew more than he did.
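For anyone curious, that arithmetic is easy to check for yourself. Here is a minimal sketch (the array and variable names are just made up for illustration):

#include <iostream>

int main()
{
    int arr[10] = {0};
    int *p = arr;
    int *q = p + 3; // pointer arithmetic advances in units of sizeof(int)

    // The byte distance between q and p is 3 * sizeof(int)
    std::cout << "byte difference: "
              << (reinterpret_cast<char*>(q) - reinterpret_cast<char*>(p)) << std::endl;
    std::cout << "3 * sizeof(int): " << 3 * sizeof(int) << std::endl;
}

On a platform with 4-byte ints both lines print 12; on one with 16 bit ints they would print 6.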
You might want to read this: http://stackoverflow.com/questions/589575/c-size-of-int-long-etc
This is a result of the loose nature of size definitions in the C and C++ language specifications. I believe C has specific minimum sizes, but the only rule in C++ is this:
1 == sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
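(C does in fact guarantee minimum ranges - short and int must cover at least 16 bits and long at least 32 - and C++ inherits those limits via <climits>.) You can confirm that your compiler honors the ordering with a few compile-time checks; a minimal sketch, assuming a C++11 or newer compiler for static_assert:

#include <climits>

static_assert(sizeof(char) == 1, "char is always size 1");
static_assert(sizeof(char)  <= sizeof(short), "short is at least as wide as char");
static_assert(sizeof(short) <= sizeof(int),   "int is at least as wide as short");
static_assert(sizeof(int)   <= sizeof(long),  "long is at least as wide as int");
static_assert(CHAR_BIT >= 8, "a byte has at least 8 bits");

int main() {}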
Moreover, sizeof(int) and sizeof(long) are not the same size on all platforms. Every 64-bit platform I've worked with has had long fit the natural word size, so 32 bits on a 32-bit architecture and 64 bits on a 64-bit architecture.

int is essentially the most convenient and efficient integer type
long is/was the largest integer type
short is the smallest integer type

If the longest integer type is also the most efficient, then int is the same as long. A while ago (think pre-32 bit), sizeof(int) == sizeof(short) on a number of platforms since 16-bit was the widest natural integer.
int and long are not always the same size, so do not assume that they are in code. Historically there have been 8 bit and 16 bit, as well as the more familiar 32 bit and 64 bit architectures. For embedded systems smaller word sizes are still common. Search the net for ILP32 and LP64 for way too much info.
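If you want to see which data model you are on, printing a few sizes is enough; a minimal sketch (ILP32 gives 4/4/4, LP64 gives 4/8/8, and 64-bit Windows' LLP64 gives 4/4/8):

#include <iostream>

int main()
{
    std::cout << "sizeof(int):   " << sizeof(int)   << std::endl;
    std::cout << "sizeof(long):  " << sizeof(long)  << std::endl;
    std::cout << "sizeof(void*): " << sizeof(void*) << std::endl;
}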
As Tom correctly pointed out, the only standard size in C++ is char, whose size is 1 (*). From there on, only a 'not smaller than' relation holds between types. Most people will claim that it depends on the architecture, but it is more of a compiler/OS decision. The same hardware running MacOSX, Windows (32/64 bits) or Linux (32/64) will have different sizes for the same data types. Different compilers on the same architecture and OS can have different sizes. Even the exact same compiler on the same OS on the same hardware can produce different sizes depending on compilation flags:
$ cat test.cpp
#include <iostream>
int main()
{
    std::cout << "sizeof(int): " << sizeof(int) << std::endl;
    std::cout << "sizeof(long): " << sizeof(long) << std::endl;
}
$ g++ -o test32 test.cpp; ./test32
sizeof(int): 4
sizeof(long): 4
$ g++ -o test64 test.cpp -m64; ./test64
sizeof(int): 4
sizeof(long): 8
That is the result of using the gcc compiler on MacOSX Leopard. As you can see, the hardware and software are the same, and yet the sizes differ between two executables born out of the same code.
If your code depends on sizes, then you are better off not using the default types, but rather types for your compiler that make the size explicit. Or use a portable library that offers that support; as an example, with ACE, ACE_UINT64 will be an unsigned 64-bit integer type regardless of the compiler/OS/architecture. The library detects the compiler and environment and uses the appropriate data type on each platform.
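If you would rather not pull in a library, the fixed-width typedefs do the same job; a minimal sketch, assuming a C++11 compiler with <cstdint> (or C99's <stdint.h>):

#include <cstdint>
#include <iostream>

int main()
{
    std::int32_t  a = 0; // exactly 32 bits wherever the type is provided
    std::uint64_t b = 0; // exactly 64 bits wherever the type is provided

    std::cout << "sizeof(a): " << sizeof(a) << std::endl; // prints 4
    std::cout << "sizeof(b): " << sizeof(b) << std::endl; // prints 8
}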
(*) I have rechecked the C++ standard, 3.9.1: char size shall be 'large enough to store any member of the implementation's basic character set'. Later, in 5.3.3: sizeof(char), sizeof(signed char) and sizeof(unsigned char) are 1, so yes, the size of a char is 1 byte.
After reading other answers, I found one that states that bool is the smallest integer type. Again, the standard is loose in its requirements and only states that it can represent true and false, but not its size. The standard is even explicit about that: 5.3.3, footnote: "sizeof(bool) is not required to be 1".
Note that some C++ implementations have decided to use bools larger than 1 byte for other reasons. On Apple MacOSX PPC systems with gcc, sizeof(bool) == 4.
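If you are curious about your own platform, the check is one line; on most current desktop compilers this prints 1, but as the footnote says, nothing in the standard forces that:

#include <iostream>

int main()
{
    std::cout << "sizeof(bool): " << sizeof(bool) << std::endl;
}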