views: 530
answers: 2

const std::string::size_type cols = greeting.size() + pad * 2 + 2;

Why string::size_type? int is supposed to work! It holds numbers!

+13  A: 

A short holds numbers too. As does a signed char.

But none of those types are guaranteed to be large enough to represent the sizes of any strings.

string::size_type guarantees just that. It is a type that is big enough to represent the size of a string, no matter how big that string is.

For a simple example of why this is necessary, consider 64-bit platforms. An int is typically still 32 bits there, but the machine can have far more than 2^32 bytes of memory.

So if a (signed) int were used, you'd be unable to create strings larger than 2^31 characters. size_type, however, will be a 64-bit value on those platforms, so it can represent larger strings without a problem.

jalf
It's also the case on PowerPC and Cell. And, as far as I can recall, on Alpha as well. Plus, of course, I think x64 is the *typical* 64-bit CPU these days. ;) But you're right, it is obviously platform-dependent.
jalf
Which 64-bit Linux platform are we talking about here? On x64 machines, it still had 32-bit ints last I tried. And on Cell processors an int is also 32 bits. And by extension, I'm assuming the same to apply to Linux on PowerPC. So no, the Linux ABI varies from platform to platform, and most of the platforms I know of specify 32-bit ints, even on Linux.
jalf
But you're right. The hardware typically defines a common ABI that software *should* follow in order to allow interoperability. The OS defines an ABI which is usually identical, but might not be. And the compiler actually implements an ABI which again usually follows the OS, but doesn't strictly speaking have to.
jalf
gosh, you're right, removing my comments...
+1  A: 

A nested size_type typedef is a requirement for STL-compatible containers (which std::string happens to be), so generic code can choose the correct integer type to represent sizes.

There's no point in using it in application code; a size_t is completely OK (int is not, because it's signed, and you'll get signed/unsigned comparison warnings).
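For example, a loop index declared as int is compared against the unsigned value returned by size(), which is exactly the warning mentioned above; using size_t avoids it (count_spaces is just an illustrative function):

```cpp
#include <cstddef>
#include <string>

std::size_t count_spaces(const std::string& s) {
    std::size_t n = 0;
    // Writing `for (int i = 0; i < s.size(); ++i)` would compare a signed
    // int against the unsigned s.size() and trigger a compiler warning.
    for (std::size_t i = 0; i < s.size(); ++i)
        if (s[i] == ' ')
            ++n;
    return n;
}
```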