The following fragment returns the next highest power of two for an (assumed unsigned) integer of type T. It does this by repeatedly OR-ing the value with right-shifted copies of itself, smearing the leading bit down into every lower position.
For all practical purposes the type used for the loop counter i is easily large enough to count the bits of even a (nominally) 65536-bit number, so it would be fine to leave it as is.
template <class T>
T supremum_2(T k) {
    if (k == T(0)) return T(1);              // zero rounds up to 1
    k--;                                     // so exact powers of two map to themselves
    for (int i = 1; i < sizeof(T)*8; i++)    // smear the highest set bit into every lower position
        k |= k >> i;
    return k + 1;                            // one past an all-ones tail is the next power of two
}
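For concreteness, this is the behaviour I expect from the fragment above (the values follow directly from the code):

#include <cassert>

int main() {
    assert(supremum_2(0u) == 1u);    // zero rounds up to 1
    assert(supremum_2(1u) == 1u);    // exact powers of two map to themselves
    assert(supremum_2(8u) == 8u);
    assert(supremum_2(5u) == 8u);    // everything else rounds up
    assert(supremum_2(9u) == 16u);
}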
To do a professional job, the type of the loop counter should be chosen at compile time so that it is guaranteed to count up to sizeof(T)*8 without overflow.
Can this be done at compile time using std::numeric_limits? If so, how?
Conceptually I would like to be able to write something like:
typedef unsigned_type_that_can_represent<sizeof(T)*8> counter_type;
...
...
for (counter_type i(1); i<sizeof(T)*8; i<<=1) k = k | k >> i;
...
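One way I can imagine approximating unsigned_type_that_can_represent today is a hand-rolled chain of std::conditional tests against std::numeric_limits (C++11, so that numeric_limits::max() is usable at compile time). This is only a sketch of the idea, not an existing library facility:

#include <limits>
#include <type_traits>

// Sketch only: pick the smallest built-in unsigned type whose maximum
// value is at least N.
template <unsigned long long N>
struct unsigned_type_that_can_represent {
    typedef typename std::conditional<
        N <= std::numeric_limits<unsigned char>::max(), unsigned char,
        typename std::conditional<
            N <= std::numeric_limits<unsigned short>::max(), unsigned short,
            typename std::conditional<
                N <= std::numeric_limits<unsigned int>::max(), unsigned int,
                typename std::conditional<
                    N <= std::numeric_limits<unsigned long>::max(), unsigned long,
                    unsigned long long
                >::type
            >::type
        >::type
    >::type type;
};

template <class T>
T supremum_2(T k) {
    typedef typename unsigned_type_that_can_represent<sizeof(T) * 8>::type counter_type;
    if (k == T(0)) return T(1);
    k--;
    for (counter_type i = 1; i < sizeof(T) * 8; i <<= 1) k |= k >> i;
    return k + 1;
}

The nesting is ugly, which is rather the point of the question: I would like a cleaner, reusable way of expressing the requirement.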
Based on the discussions below I have decided to add the following context.
Put another way:
How can we guarantee, at compile time, to select types for the internal workings of template code that are both suitable and efficient (only as big as they need to be)? If we find ourselves using concrete types inside template code, we may be making inadvertent assumptions about the template's type parameters through a potentially opaque route.
For example, if we were to stick with (say) an int for the counter, all will be fine until someone instantiates the template with their bigint library, whose integers may have more binary digits than an int can count. Should we therefore make the type unsigned long long? Surely that just delays the problem (albeit for a long time)? There are shades of "640K - big enough for everybody", or of static array sizes, about that solution.
The obvious, but somewhat inefficient, choice in this instance is to make the counter the same type as the number k (see the sketch below). It is (in principle) inefficient, since all we demand of the counter is that it can represent the number of bits in k. It may also be that for other situations this is simply the wrong thing to assume.
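That choice would look something like this (a sketch; it still assumes sizeof(T)*8 is a meaningful bit count for T, which is exactly what a bigint breaks):

// "Use T itself" option: correct for built-in unsigned T,
// but the counter is potentially much wider than it needs to be.
template <class T>
T supremum_2(T k) {
    if (k == T(0)) return T(1);
    k--;
    for (T i = T(1); i < T(sizeof(T) * 8); i <<= 1) k |= k >> i;
    return k + 1;
}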
What about the general case? Meta-programming looks like the appropriate approach, but how do we keep it 'sane'? Perhaps, formally, the requirement is for a compile-time function that maps (potentially derived) abstract type requirements onto types.
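For this particular flavour of requirement there is prior art, if I remember correctly: Boost.Integer's type-selection templates (boost::uint_value_t<V>::least) map a required maximum value onto the smallest built-in unsigned type that can hold it, which is more or less the counter_type above. The fully general "abstract requirements to types" mapping is another matter.

#include <boost/integer.hpp>   // assuming boost::uint_value_t lives here
#include <climits>

// Smallest built-in unsigned type that can hold the value sizeof(T)*CHAR_BIT,
// i.e. a suitable counter type for the loop above.
template <class T>
struct counter_for {
    typedef typename boost::uint_value_t<sizeof(T) * CHAR_BIT>::least type;
};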
Perhaps it's a job for YABL (Yet Another Boost Library)!
[Apologies for rambling]