views: 982
answers: 17
I'm looking for the most efficient way to calculate the minimum number of bytes needed to store an integer without losing precision.

e.g.

int: 10 = 1 byte
int: 257 = 2 bytes
int: 18446744073709551615 (UINT64_MAX) = 8 bytes

Thanks

P.S. This is for a hash function which will be called many millions of times

Also the byte sizes don't have to be a power of two

The fastest solution seems to be one based on Tronic's answer:

    int bytes;
    if (hash <= UINT32_MAX) 
    {
        if (hash < 16777216U) // 2^24
        {
            if (hash <= UINT16_MAX)
            {
                if (hash <= UINT8_MAX) bytes = 1;
                else bytes = 2;
            }
            else bytes = 3;
        }
        else bytes = 4;
    } 
    else // hash > UINT32_MAX
    {
        if (hash < 72057594037927936ULL) // 2^56
        {
            if (hash < 281474976710656ULL) // 2^48
            {
                if (hash < 1099511627776ULL) bytes = 5; // 2^40
                else bytes = 6;
            }
            else bytes = 7;
        }
        else bytes = 8;
    }

The speed difference using mostly 56-bit values was minimal (but measurable) compared to Thomas Pornin's answer. Also, I didn't test the solution using __builtin_clzl, which could be comparable.

+2  A: 

You need to raise 256 to successive powers until the result is larger than your value.

For example: (Tested in C#)

long long limit = 1;
int byteCount;

for (byteCount = 1; byteCount < 8; byteCount++) {
    limit *= 256;
    if (limit > value)
        break;
}

If you only want byte sizes to be powers of two (If you don't want 65,537 to return 3), replace byteCount++ with byteCount *= 2.

SLaks
shame to use a loop for a straightforward bit of arithmetic....
Mitch Wheat
@Mitch: `log` is not straightforward arithmetic, and he needs optimal performance.
SLaks
+13  A: 

You need just two simple ifs if you are only interested in the common sizes. Consider this (assuming that you actually have unsigned values):

int bytes;
if (val < 0x10000) {
    if (val < 0x100) bytes = 1;          // 8 bit
    else bytes = 2;                      // 16 bit
} else {
    if (val < 0x100000000ULL) bytes = 4; // 32 bit
    else bytes = 8;                      // 64 bit
}

Should you need to test for other sizes, choosing a middle point and then doing nested tests will keep the number of tests very low in any case. However, in that case making the testing a recursive function might be a better option, to keep the code simple. A decent compiler will optimize away the recursive calls so that the resulting code is still just as fast.
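For example, the middle-point idea could look like this (a sketch of my own, not code from this answer; bytes_in_range is a hypothetical helper that binary-searches the byte count between lo and hi):

/* Sketch only: bytes_in_range(val, 1, 8) covers the full 64-bit range. */
static int bytes_in_range(unsigned long long val, int lo, int hi)
{
    if (lo == hi)
        return lo;
    int mid = (lo + hi) / 2;                       /* candidate byte count (1..7 here) */
    unsigned long long limit = 1ULL << (mid * 8);  /* smallest value needing more than mid bytes */
    if (val < limit)
        return bytes_in_range(val, lo, mid);
    else
        return bytes_in_range(val, mid + 1, hi);
}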

Tronic
+1 for simple, fast solution
Xorlev
But what about when we have 256 bit integers!? ;)
Earlz
It won't run fast if you consider branch misprediction penalty
ZelluX
+1 for this answer. It may not be as pretty as using a log but it gets the job done much much faster.
Spencer Ruport
This won't be very useful if he needs an arbitrary number of bytes.
Peter Alexander
Nice one. A binary tree search would work nicely with this solution :)
Russell
+9  A: 

Assuming a byte is 8 bits, to represent an integer x you need [log2(x) / 8] + 1 bytes where [x] = floor(x).

Ok, I see now that the byte sizes aren't necessarily a power of two. Consider a byte size of b bits. The formula is still [log2(x) / b] + 1.

Now, to calculate the log, either use lookup tables (best way speed-wise) or use binary search, which is also very fast for integers.

IVlad
This is a good way.
Justin Peel
Yes, but calculating the log will be very slow compared to other methods.
SLaks
log2 is a floating-point function and thus subject to nasty rounding errors. Also, I would estimate this solution being much slower than mine (but I didn't benchmark, so take that with a grain of salt).
Tronic
It's true, you shouldn't implement this as it is. These are just the formulas; you would probably want to calculate the log by binary search and bitwise operations, or through a lookup table of some sort. Tronic: true, something like what you posted would be faster, though it needs more conditions.
IVlad
@Slaks: not necessarily: it may be held in efficient lookup tables...
Mitch Wheat
+4  A: 

This will get you the number of bytes. It's not strictly the most efficient, but unless you're programming a nanobot powered by the energy contained in a red blood cell, it won't matter.

int count = 0;
while (numbertotest > 0)
{
  numbertotest >>= 8;
  count++;
}
Ben Collins
The nanobot would disagree.
Geo
+1  A: 

Up to eight times, shift the int eight bits to the right and see if there are still 1-bits left. The number of shifts you make before the value reaches zero is the number of bytes you need.

More succinctly, the minimum number of bytes you need is ceil(min_bits/8), where min_bits is the index (i+1) of the highest set bit.
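For example, 257 has its highest set bit at index i = 8, so min_bits = 9 and ceil(9/8) = 2 bytes.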

John Feminella
+2  A: 

Floor((log2(N) / 8) + 1) bytes

Mitch Wheat
He needs performance, so he should avoid logs.
SLaks
@Slaks: not necessarily: it may be held in efficient lookup tables...
Mitch Wheat
+2  A: 

You need exactly the log function

nb_bytes = floor(log(x)/log(256)) + 1

If you use log2, then log2(256) == 8, so:

floor(log2(x)/8) + 1

makapuf
He needs performance, so he should avoid logs.
SLaks
+4  A: 

Find the number of bits by taking the log2 of the number, then divide that by 8 to get the number of bytes.

You can find logn of x by the formula:

logn(x) = log(x) / log(n)

Update:

Since you need to do this really quickly, Bit Twiddling Hacks has several methods for quickly calculating log2(x). The look-up table approach seems like it would suit your needs.

Bill the Lizard
+20  A: 

Use this:

int n = 0;
while (x != 0) {
    x >>= 8;
    n ++;
}

This assumes that x contains your (positive) value.

Note that zero will be declared encodable as no byte at all. Also, most variable-size encodings need some length field or terminator to know where encoding stops in a file or stream (usually, when you encode an integer and mind about size, then there is more than one integer in your encoded object).

Thomas Pornin
+1 for bit shifting
Xorlev
+1 for a simple solution, even though this might not be quite as fast as possible (but it is very fast for small values, so overall it might be the fastest solution).
Tronic
@Tronic: whether this solution is faster than your dichotomic search depends on the patterns of input data. I think it would take a very specific setup to exhibit an actually measurable performance difference. My code has the slight advantage of automatically dealing with "longer types" (e.g. no need to change anything when newer C compilers with 128-bit types are developed).
Thomas Pornin
You could use `int n = 0; do { x >>= 8; n++; } while(x);` if you want 0 to return 1 byte instead.
Chris Lutz
+8  A: 

You may first get the position of the highest bit set, which is the same as the integer log2(N), and then get the bytes needed by floor(log2(N) / 8) + 1.

Here are some bit hacks for getting the position of the highest bit set, which are copied from http://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious, and you can click the URL for details of how these algorithms work.

Find the integer log base 2 of an integer with a 64-bit IEEE float

int v; // 32-bit integer to find the log base 2 of
int r; // result of log_2(v) goes here
union { unsigned int u[2]; double d; } t; // temp

t.u[__FLOAT_WORD_ORDER==LITTLE_ENDIAN] = 0x43300000;
t.u[__FLOAT_WORD_ORDER!=LITTLE_ENDIAN] = v;
t.d -= 4503599627370496.0;
r = (t.u[__FLOAT_WORD_ORDER==LITTLE_ENDIAN] >> 20) - 0x3FF;

Find the log base 2 of an integer with a lookup table

static const char LogTable256[256] = 
{
#define LT(n) n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n
    -1, 0, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3,
    LT(4), LT(5), LT(5), LT(6), LT(6), LT(6), LT(6),
    LT(7), LT(7), LT(7), LT(7), LT(7), LT(7), LT(7), LT(7)
};

unsigned int v; // 32-bit word to find the log of
unsigned r;     // r will be lg(v)
register unsigned int t, tt; // temporaries

if (tt = v >> 16)
{
  r = (t = tt >> 8) ? 24 + LogTable256[t] : 16 + LogTable256[tt];
}
else 
{
  r = (t = v >> 8) ? 8 + LogTable256[t] : LogTable256[v];
}

Find the log base 2 of an N-bit integer in O(lg(N)) operations

unsigned int v;  // 32-bit value to find the log2 of 
const unsigned int b[] = {0x2, 0xC, 0xF0, 0xFF00, 0xFFFF0000};
const unsigned int S[] = {1, 2, 4, 8, 16};
int i;

register unsigned int r = 0; // result of log2(v) will go here
for (i = 4; i >= 0; i--) // unroll for speed...
{
  if (v & b[i])
  {
    v >>= S[i];
    r |= S[i];
  } 
}


// OR (IF YOUR CPU BRANCHES SLOWLY):

unsigned int v;          // 32-bit value to find the log2 of 
register unsigned int r; // result of log2(v) will go here
register unsigned int shift;

r =     (v > 0xFFFF) << 4; v >>= r;
shift = (v > 0xFF  ) << 3; v >>= shift; r |= shift;
shift = (v > 0xF   ) << 2; v >>= shift; r |= shift;
shift = (v > 0x3   ) << 1; v >>= shift; r |= shift;
                                        r |= (v >> 1);


// OR (IF YOU KNOW v IS A POWER OF 2):

unsigned int v;  // 32-bit value to find the log2 of 
static const unsigned int b[] = {0xAAAAAAAA, 0xCCCCCCCC, 0xF0F0F0F0, 
                                 0xFF00FF00, 0xFFFF0000};
register unsigned int r = (v & b[0]) != 0;
for (i = 4; i > 0; i--) // unroll for speed...
{
  r |= ((v & b[i]) != 0) << i;
}
ZelluX
+1 for interesting link
pythonic metaphor
+3  A: 

I think this is a portable implementation of the straightforward formula:

#include <limits.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    int i;
    unsigned int values[] = {10, 257, 67898, 140000, INT_MAX, INT_MIN};

    for ( i = 0; i < sizeof(values)/sizeof(values[0]); ++i) {
        printf("%d needs %.0f bytes\n",
                values[i],
                1.0 + floor(log(values[i]) / (M_LN2 * CHAR_BIT))
              );
    }
    return 0;
}

Output:

10 needs 1 bytes
257 needs 2 bytes
67898 needs 3 bytes
140000 needs 3 bytes
2147483647 needs 4 bytes
-2147483648 needs 4 bytes

Whether, and how much, the lack of speed and the need to link the floating-point math library matter depends on your needs.

Sinan Ünür
+2  A: 

There are a multitude of ways to do this.

Option #1.

 int numBytes = 0;
 do {
     numBytes++;
 } while (i >>= 8);
 return (numBytes);

In the above example, i is the number you are testing, and this generally works on any processor, for any size of integer.

However, it might not be the fastest. Alternatively, you can try a series of if statements ...

For a 32-bit integer:

unsigned int upper, high;

if ((upper = (value >> 16)) == 0) {
    /* Bit in lower 16 bits may be set. */
    if ((high = (value >> 8)) == 0) {
        return (1);
    }
    return (2);
}

/* Bit in upper 16 bits is set */
if ((high = (upper >> 8)) == 0) {
    return (3);
}
return (4);

For 64-bit integers, another level of if statements would be required.
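As a sketch of my own (not part of the original answer), that extra level could test the upper 32 bits first and fall back to the 32-bit ladder above; byte_count_32 here is a hypothetical wrapper around that 32-bit code:

int byte_count_64(unsigned long long value)
{
    unsigned int upper32 = (unsigned int)(value >> 32);

    if (upper32 == 0)
        return byte_count_32((unsigned int)value);  /* the 32-bit tests above: 1..4 */

    /* A bit is set in the upper 32 bits: same pattern, shifted up by 4 bytes. */
    if ((upper32 >> 16) == 0) {
        if ((upper32 >> 8) == 0)
            return (5);
        return (6);
    }
    if ((upper32 >> 24) == 0)
        return (7);
    return (8);
}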

If the speed of this routine is as critical as you say, it might be worthwhile to do this in assembler if you want it as a function call. That could allow you to avoid creating and destroying the stack frame, saving a few extra clock cycles if it is that critical.

Sparky
+3  A: 

The function to find the position of the first '1' bit from the most significant side (clz or bsr) is usually a simple CPU instruction (no need to mess with log2), so you could divide that by 8 to get the number of bytes needed. In gcc, there's __builtin_clz for this task:

#include <limits.h>
int bytes_needed(unsigned long long x) {
   if (x == 0)   /* __builtin_clzll(0) is undefined, so handle zero separately */
      return 1;
   int bits_needed = sizeof(x)*CHAR_BIT - __builtin_clzll(x);
   return (bits_needed + 7) / 8;
}

(On MSVC you would use the _BitScanReverse intrinsic.)
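For reference, a rough MSVC counterpart might look like this (my own untested sketch; it uses _BitScanReverse64, which is only available when targeting 64-bit platforms):

#include <intrin.h>

int bytes_needed_msvc(unsigned long long x)
{
    unsigned long index;                   /* position of the highest set bit, 0..63 */
    if (!_BitScanReverse64(&index, x))     /* returns 0 when x == 0 */
        return 1;
    return (int)(index / 8) + 1;           /* bytes = ceil((index + 1) / 8) */
}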

KennyTM
+3  A: 

You could write a little template meta-programming code to figure it out at compile time if you need it for array sizes:

#include <climits>
#include <cstddef>
#include <iostream>

template<unsigned long long N> struct NBytes
{ static const size_t value = NBytes<N/256>::value+1; };
template<> struct NBytes<0> 
{ static const size_t value = 0; };

int main()
{
    std::cout << "short = " << NBytes<SHRT_MAX>::value << " bytes\n";
    std::cout << "int = " << NBytes<INT_MAX>::value << " bytes\n";
    std::cout << "long long = " << NBytes<ULLONG_MAX>::value << " bytes\n";
    std::cout << "10 = " << NBytes<10>::value << " bytes\n";
    std::cout << "257 = " << NBytes<257>::value << " bytes\n";
    return 0;
}

output:

short = 2 bytes
int = 4 bytes
long long = 8 bytes
10 = 1 bytes
257 = 2 bytes

Note: I know this isn't answering the original question, but it answers a related question that people will be searching for when they land on this page.

Matt Price
Unfortunately it can't be done at compile time, but interesting solution. Thanks
Ben Reeves
+1  A: 

A bit basic, but since there will be a limited number of outputs, can you not pre-compute the breakpoints and use a case statement? No need for calculations at run-time, only a limited number of comparisons.
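For instance (a sketch of my own; C can't switch on value ranges, so this uses a small table of pre-computed breakpoints and comparisons rather than a literal case statement):

static const unsigned long long breakpoints[7] = {
    1ULL << 8,  1ULL << 16, 1ULL << 24, 1ULL << 32,
    1ULL << 40, 1ULL << 48, 1ULL << 56
};

int bytes_from_table(unsigned long long hash)
{
    int bytes = 1;
    for (int i = 0; i < 7; i++)   /* at most 7 comparisons against precomputed limits */
        bytes += (hash >= breakpoints[i]);
    return bytes;
}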

James
+1  A: 

Why not just use a 32-bit hash?


That will work at near-top-speed everywhere.

I'm rather confused as to why a large hash would even be wanted. If a 4-byte hash works, why not just use it always? Excepting cryptographic uses, who has hash tables with more than 2^32 buckets anyway?

DigitalRoss
+1  A: 

There are lots of great recipes for stuff like this over at Sean Anderson's "Bit Twiddling Hacks" page.

orion elenzil