views: 377
answers: 5
How can I get the number of "1"s in the binary representation of a number without actually converting and counting ?

e.g.

  def number_of_ones(n):
      # do something
      # I want to MAKE this FASTER (computationally less complex).
      c = 0
      while n:
          c += n % 2
          n //= 2
      return c


>>> number_of_ones(5)
2
>>> number_of_ones(4)
1
+4  A: 

I'm not a python programmer, so this is not python but hopefully it will be enough for you to follow.

c = 0
while (n > 0)
  c += 1
  n &= n-1

return c

While a little obscure, its primary advantages are speed and simplicity. The while loop iterates only once for every bit set to 1 in n, because n &= n-1 clears the lowest set bit each time.
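In Python, the same trick (often called Kernighan's method) might look like this sketch:

```python
def number_of_ones(n):
    # n & (n - 1) clears the lowest set bit of n, so this loop
    # runs once per 1-bit rather than once per bit position.
    c = 0
    while n > 0:
        c += 1
        n &= n - 1
    return c
```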

Stephen Nutt
+1  A: 

If you're actually concerned about speed, code it up in C (see this question for how), and interface with the C implementation using something like ctypes.

Adam Rosenfield
I am concerned with computational complexity not with actual speed or language.
TheMachineCharmer
+5  A: 

IMO, a good approach would be to use a look-up table: create a table that maps each byte value to its number of 1's (you can use the code you posted to generate it; it only needs to run once), and then use something like this:

def number_of_ones(n):
    count = 0
    while n != 0:
        count += lookup_table[n & 0xff]
        n >>= 8
    return count

I believe this is a fairly good trade-off between space and running-time.
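A sketch of how the table might be generated (the name `lookup_table`, the 256-entry list, and the use of `bin(...).count` to build it are assumptions, not part of the answer):

```python
# Build a 256-entry table once: lookup_table[b] is the number of
# 1-bits in the byte value b.
lookup_table = [bin(b).count("1") for b in range(256)]

def number_of_ones(n):
    count = 0
    while n != 0:
        count += lookup_table[n & 0xff]  # count 1s in the low byte
        n >>= 8                          # drop the low byte
    return count
```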

spatz
+1  A: 

If you want to do it in a single line (for a number n that fits in 32 bits), you could use:

sum(n & (1 << i) > 0 for i in range(32))
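Wrapped in a function for clarity (the 32-bit limit is inherent to this version):

```python
def number_of_ones(n):
    # Test each of the 32 bit positions; each True counts as 1 in sum().
    return sum(n & (1 << i) > 0 for i in range(32))
```

For arbitrarily large Python integers, `range(n.bit_length())` would remove the 32-bit limit.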
tom10
+5  A: 

You cannot make this computationally less complex. It will be O(n) in the number of bits, or, as the answer with the & trick showed, O(k) in the number of bits set to 1; but unless all of the numbers you are using are a special case, the latter should average n/2 set bits, so both of those bounds amount to the same O(n).

And the lookup-table trick, of course, is actually doing nothing for the computational complexity; it's just paying for time with space but without changing the underlying economics, which are that you must examine each bit once somehow and there is no way around that. You cannot, logically, answer a question about the bits in the number without inspecting each of them.

Now, I suppose I'm being a bit sloppy, since many of these examples are actually O(n^2): in Python you operate on the whole number at once, so with a Python long integer of, say, 100 bytes, a +, &, or / operation looks at each byte at least once, and that happens over and over until the number is reduced to zero (in the schemes outlined above). So these, again, are really O(n^2) operations. I am not sure Python allows a true O(n) solution here.

Anyway: if you were really asking about computational complexity, which specifically means big-O analysis, that's your answer. :-)

Brandon Craig Rhodes