The stdint.h header lacks an int_fastest_t and uint_fastest_t to correspond with the {,u}int_fastX_t types. For instances where the width of the integer type does not matter, how does one pick the integer type that allows processing the greatest quantity of bits with the least performance penalty? For example, if one were searching for the first set bit in a buffer using a naive approach, a loop such as this might be considered:

// return the bit offset of the first 1 bit
size_t find_first_bit_set(void const *const buf)
{
    uint_fastest_t const *p = buf; // use the fastest type for comparison to zero
    for (; *p == 0; ++p); // inc p while no bits are set
    // return offset of first bit set
    return ((char const *)p - (char const *)buf) * CHAR_BIT + ffsX(*p) - 1;
}

Naturally, using char would require more operations than int. But on a 32-bit system, long long might make each operation expensive enough to outweigh the iterations saved over int, and so on.

My current assumption is that for mainstream architectures, long is the safest bet: it's 32 bits on 32-bit systems and 64 bits on 64-bit systems.
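
For reference, under that assumption the snippet above could be made concrete with something like this (a sketch only: uint_fastest_t and ffsX are the hypothetical names from the question, and __builtin_ffsl is a GCC extension):

#include <limits.h>  /* CHAR_BIT */

/* Hypothetical: stand in for the missing uint_fastest_t by assuming
   that long tracks the native word size on mainstream targets. */
typedef unsigned long uint_fastest_t;

/* Hypothetical width-matched ffs; GCC's __builtin_ffsl operates on long. */
#define ffsX(x) __builtin_ffsl((long)(x))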

+2  A: 

If you want to be certain you've got the fastest implementation, why not benchmark each one on the systems you're expecting to run on instead of trying to guess?

Jon Cage
If this can be benchmarked on a platform-by-platform basis, why isn't such a type in stdint.h with its bedfellows, the {,u}int_leastX_t types, which presumably have undergone the same checks?
Matt Joiner
Well, it depends on what operation/algorithm you're trying to implement. Look at Jason Williams's answer, for example. Those types might get you pretty close to the fastest solution, but there are probably a few instances where there are faster ways to do things. Ultimately, the only way to be sure is to benchmark.
Jon Cage
A: 

I would guess that the types size_t (for an unsigned type) and ptrdiff_t (for a signed type) will usually correspond to quite efficient integer types on any given platform.

But nothing can prove that short of inspecting the generated assembly and running benchmarks.

Edit, including the different comments, here and in other replies:

size_t and ptrdiff_t are the only typedefs that are normative in C99 and for which one may make a reasonable assumption that they are related to the architecture.

There are 5 different possible ranks for standard integer types (char, short, int, long, long long). All the forces push towards widths of 8, 16, 32, 64 and, in the near future, 128 bits. As a consequence, int will be stuck at 32 bits. Its definition will have nothing to do with efficiency on the platform; it will simply be constrained by that width requirement.
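
For what it's worth, checking what these two typedefs come out to on a given platform is trivial (a minimal sketch; output varies by implementation):

#include <limits.h>
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    /* Report the widths of the two typedefs C99 guarantees to exist. */
    printf("size_t:    %zu bits\n", sizeof(size_t) * CHAR_BIT);
    printf("ptrdiff_t: %zu bits\n", sizeof(ptrdiff_t) * CHAR_BIT);
    return 0;
}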

Jens Gustedt
I suspect that size_t was picked to match the integer width required to address the maximum addressable virtual memory space. While it's likely the same as the native register width, I don't think it was picked with performance in mind. Also, a nitpick: ssize_t should be used for signed integers of this kind; ptrdiff_t should only be used for pointer subtraction :) (and {,u}intptr_t for general pointer manipulation :])
Matt Joiner
@Matt: `size_t` and `ptrdiff_t` are the only types that are guaranteed to exist by C99, the others are optional.
Jens Gustedt
@Jens Gustedt: Didn't know this, thanks.
Matt Joiner
+2  A: 

I'm not sure I really understand the question, but why aren't you just using int? Quoting from my free draft copy of the (wrong, i.e. C++) standard: "Plain ints have the natural size suggested by the architecture of the execution environment."

But I think that if you want to have the optimal integer type for a certain operation, it will be different depending on which operation it is. Trying to find the first bit in a large data buffer, or finding a number in a sequence of integers, or moving them around, could very well have completely different optimal types.

EDIT:

For whatever it's worth, I did a small benchmark. On my particular system (Intel i7 920 with Linux, gcc -O3) it turns out that long ints (64 bits) are quite a bit faster than plain ints (32 bits), on this particular example. I would have guessed the opposite.
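
The benchmark code itself wasn't posted; the following is a minimal sketch of the kind of comparison meant (my construction, not the original: the buffer size, REPS, and the clock()-based timing are illustrative, and the cast in the macro ignores strict-aliasing niceties):

#include <stdio.h>
#include <string.h>
#include <time.h>

#define BUF_BYTES (64 * 1024 * 1024)
#define REPS 16

static unsigned long buf[BUF_BYTES / sizeof(unsigned long)];

/* Time a zero-word scan over the same buffer with a given word type. */
#define TIME_SCAN(type, name)                                           \
    do {                                                                \
        clock_t t0 = clock();                                           \
        type acc = 0;                                                   \
        for (int r = 0; r < REPS; ++r) {                                \
            type const *p = (type const *)buf;                          \
            type const *end = p + BUF_BYTES / sizeof(type);             \
            while (p < end)                                             \
                acc |= *p++;                                            \
        }                                                               \
        volatile type sink = acc; /* keep the scan from being elided */ \
        (void)sink;                                                     \
        printf("%-5s %.3fs\n", name,                                    \
               (double)(clock() - t0) / CLOCKS_PER_SEC);                \
    } while (0)

int main(int argc, char **argv)
{
    (void)argv;
    /* zeros when run with no arguments; runtime-dependent so the
       compiler cannot fold the scan away */
    memset(buf, argc - 1, sizeof buf);
    TIME_SCAN(unsigned int,  "int");
    TIME_SCAN(unsigned long, "long");
    return 0;
}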

Thomas Padron-McCarthy
`int` is not 64 bit on the very common LP64 x86_64 platform
Matt Joiner
@Matt: Yes. but will it automatically be faster if it's wider? I don't think so, but I haven't benchmarked anything.
Thomas Padron-McCarthy
Because there are only 5 possible ranks for standard integer types (`char`, `short`, `int`, `long`, `long long`), and all the forces push towards widths of 8, 16, 32, 64 and, in the near future, 128 bits, `int` will be stuck at 32 bits. Its definition will have nothing to do with efficiency on the platform, but just be constrained by that. On my machine, `int` is 32 bits, and `int_fast32_t` is `long`, which in turn is 64 bits.
Jens Gustedt
@Jens Gustedt: That throws out the possibility that `int` would be advanced when/if 32bit integers are no longer the fastest on x86, thanks.
Matt Joiner
+1  A: 

The answer is int itself. At least in C++, where 3.9.1/2 of the standard says:

Plain ints have the natural size suggested by the architecture of the execution environment

I expect the same is true for C, though I don't have any of the standards documents.

j_random_hacker
Thanks for the standard quote, but how do you explain LP64 x86_64?
Matt Joiner
@Matt: I believe that 64-bit arithmetic on Intel x86_64 CPUs is slower than 32-bit arithmetic. My recollection is that AMD came out with a 64-bit version of the x86 general-purpose instruction set first, with 64-bit arithmetic that may or may not have been as fast as the 32-bit, and Intel had to catch up fast, so its 64-bit ops are internally implemented on the same 32-bit-wide data path as before and thus take twice as long as plain 32-bit ops. Can anyone confirm this?
j_random_hacker
This is definitely the best answer if we can get this confirmed.
Matt Joiner
@Matt: Might be worth searching SO questions for "64-bit" or similar... That's probably where I got it from. Sorry I don't have the time to do this myself right now.
j_random_hacker
+3  A: 

Theoretically, int is the best bet. It should map to the CPU's native register size, and thus be "optimal" in the sense you're asking about.

However, you may still find that a 64-bit or 128-bit integer is faster on some CPUs than a 32-bit one, because although these are larger than the register size, they reduce the number of iterations of your loop, and thus may work out more efficient by minimising the loop overheads and/or taking advantage of burst (sequential multi-word) memory access to load/store the data faster.

(For example, on ARM2 processors it took 4 memory cycles to load one 32-bit register, but only 5 cycles to load two sequentially, and 7 cycles to load four sequentially. The routine you suggest above would be optimised to use as many registers as you could free up (usually 8 to 10), and could therefore run up to 3 or 4 times faster by using multiple registers per loop iteration.)

The only way to be sure is to write several routines and then profile them on the specific target machine to find out which produces the best performance.
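
To illustrate the several-words-per-iteration idea in portable C (a sketch assuming the buffer is word-aligned, a multiple of four words, and guaranteed to contain a nonzero word; the real ARM-era win came from multi-register load instructions, which the compiler may or may not emit for this):

/* Return a pointer to the first nonzero word, testing four words per
   iteration to amortise the loop overhead; the caller guarantees a
   nonzero word exists before the end of the buffer. */
static unsigned long const *scan4(unsigned long const *p)
{
    for (;;) {
        /* one OR-combined test covers four words */
        if (p[0] | p[1] | p[2] | p[3]) {
            if (p[0]) return p;
            if (p[1]) return p + 1;
            if (p[2]) return p + 2;
            return p + 3;
        }
        p += 4;
    }
}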

Jason Williams
Except for the fact that I don't think `int` maps to the native register size on x86_64 (is this a property of LP64 traditions only and not the case on say ARM?), you make a good point!
Matt Joiner
@Matt: Yes, that's why I covered myself by starting with "theoretically" :-) The size of 'int' is ultimately compiler-defined, so this type of optimisation really has to be determined for every compiler on every platform.
Jason Williams
+3  A: 

int_fast8_t is always the fastest integer type in a correct implementation. There can never be integer types smaller than 8 bits (because CHAR_BIT>=8 is required), and since int_fast8_t is the fastest integer type with at least 8 bits, it's thus the fastest integer type, period.
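
Whether a given implementation lives up to that is easy to inspect (a minimal sketch):

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* See what the implementation actually chose for the "fast" types. */
    printf("int_fast8_t:  %zu bits\n", sizeof(int_fast8_t) * CHAR_BIT);
    printf("int_fast16_t: %zu bits\n", sizeof(int_fast16_t) * CHAR_BIT);
    printf("int_fast32_t: %zu bits\n", sizeof(int_fast32_t) * CHAR_BIT);
    return 0;
}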

R..
Theoretically this answer is correct (very impressive reasoning), but on my system: `typedef unsigned char uint_fast8_t;`, which I'm sure is not going to be the fastest.
Matt Joiner
On x86, all types except 16-bit (which is slow) are equally fast.
R..
Actually, a slight correction: for a single variable, they're all equally fast, but if you're using a large number of integer variables (especially an array), smaller types will of course be faster because of reduced impact on the cache. Thus, in a sense, `char` is the fastest type on x86.
R..
@R..: Well I won't say you're wrong without testing it first, but it sounds somewhat unexpected. Keep in mind that loop overhead is considered *against* any speedup that smaller types might give. Total throughput is being considered here.
Matt Joiner
Oh, well it seems I missed what you're trying to do, but your question was also misleading in saying you don't care about the size. You **do** care about the size because it affects how many times you have to loop. In that case, I would recommend just using `size_t` or `uintptr_t`, which are almost surely the system word size (`int` might not be).
R..
A: 

If you're compiling with gcc, i'd recommend using __builtin_ffs() for finding the first bit set:

Built-in Function: int __builtin_ffs (int x)
Returns one plus the index of the least significant 1-bit of x, or if x is zero, returns zero.

This will be compiled into (often a single) native assembly instruction.
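
For example, the question's loop could be written against it directly (a GCC-specific sketch; __builtin_ffsl is the long-width variant of the built-in, and a set bit is assumed to exist):

#include <limits.h>  /* CHAR_BIT */
#include <stddef.h>  /* size_t */

// return the bit offset of the first 1 bit, one unsigned long at a time
size_t find_first_bit_set(void const *const buf)
{
    unsigned long const *p = buf;  /* assumes suitable alignment */
    for (; *p == 0; ++p)
        ;  /* skip all-zero words */
    return ((char const *)p - (char const *)buf) * CHAR_BIT
           + __builtin_ffsl((long)*p) - 1;
}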

srparish
The example was only for illustration. I used ffsX to denote non-specific width ffs implementation.
Matt Joiner
A: 

It is not possible to answer this question since the question is incomplete. As an analogy, consider the question:

What is the fastest vehicle?

A Bugatti Veyron? Certainly fast, but no good for going from London to New York.

What is missing from the question is the context the integer will be used in. In the original example above, I doubt you'd see much difference between 8, 32 or 64 bit values if the array is large and sparse, since you'll be hitting memory bandwidth limits before CPU limits.

The main point is, the architecture does not define what size the various integer types are, it's the compiler designer that does that. The designer will carefully weigh up the pros and cons for various sizes for each type for a given architecture and pick the most appropriate.

I guess a 32-bit int was chosen on 64-bit systems because, for most of the operations ints are used for, 32 bits are enough. Since memory bandwidth is a limiting factor, saving on memory use was probably the overriding factor.

Skizz
Sure, the total throughput might be the same, but I need those cycles for other things.
Matt Joiner
@Matt: `I need those cycles for other things` - hmmm, unless you're hand coding the assembler to interleave several tasks at the instruction level, those cycles are going to be wasted, the OS doesn't task switch at that frequency/granularity.
Skizz
A: 

For all existing mainstream architectures long is the fastest type at present for loop throughput.

Matt Joiner
The fastest standard type, maybe. Most mainstream architectures with multimedia support have vector extensions.
Potatoswatter
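
For illustration, a hedged sketch of such a vector scan using SSE2 intrinsics (my construction, not from the thread; assumes x86, a 16-byte-aligned buffer, and that a set bit exists):

#include <emmintrin.h>  /* SSE2 intrinsics */
#include <limits.h>     /* CHAR_BIT */
#include <stddef.h>     /* size_t */
#include <strings.h>    /* ffs() (POSIX) */

// return the bit offset of the first 1 bit, scanning 16 bytes at a time
size_t find_first_bit_set_sse2(void const *const buf)
{
    __m128i const *p = (__m128i const *)buf;
    for (;;) {
        __m128i v = _mm_load_si128(p);  /* aligned 16-byte load */
        /* mask has a 1 bit for every byte of v that equals zero */
        int mask = _mm_movemask_epi8(_mm_cmpeq_epi8(v, _mm_setzero_si128()));
        if (mask != 0xFFFF)
            break;  /* this block contains a nonzero byte */
        ++p;
    }
    /* narrow the search to the byte within the 16-byte block */
    unsigned char const *b = (unsigned char const *)p;
    while (*b == 0)
        ++b;
    return ((char const *)b - (char const *)buf) * CHAR_BIT + ffs(*b) - 1;
}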