tags:
views: 432
answers: 6

How can I convert an integer to its bit representation? I want to take an integer and return a vector that contains the 1's and 0's of the integer's bit representation.

I'm having a heck of a time trying to do this myself, so I thought I would ask whether there is a built-in library function that could help.

Thanks!

Edit: Excellent info! Thanks guys!

+3  A: 

Doesn't work with negatives.

#include <vector>
#include <algorithm>

using namespace std;

vector<int> convert(int x) {
  vector<int> ret;
  while (x) {
    if (x & 1)              // test the low-order bit
      ret.push_back(1);
    else
      ret.push_back(0);
    x >>= 1;                // move on to the next bit
  }
  reverse(ret.begin(), ret.end());  // bits were collected least significant first
  return ret;
}
dcp
Many thanks! :) I can now implement the algorithm the way I originally intended. :D
bobber205
Or `do ret.push_back( x & 1 ); while ( x >>= 1 );`. This version returns a zero bit for zero input.
Potatoswatter
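
A minimal sketch of that do-while variant, keeping dcp's signature and still assuming a non-negative input:

#include <vector>
#include <algorithm>

std::vector<int> convert(int x) {
  std::vector<int> ret;
  do {
    ret.push_back(x & 1);   // record the low-order bit
    x >>= 1;
  } while (x);              // the body runs at least once, so 0 yields {0}
  std::reverse(ret.begin(), ret.end());
  return ret;
}
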
A: 

The world's worst integer-to-bits-as-bytes converter:

#include <algorithm>
#include <functional>
#include <iterator>
#include <stdlib.h>  // _itoa (non-standard; available in MSVC)
#include <string.h>  // strlen

// Input "iterator" that always yields '0'; used as the second input
// range for std::transform below.
class zero_ascii_iterator: public std::iterator<std::input_iterator_tag, char>
{
public:
    zero_ascii_iterator &operator++()
    {
        return *this;
    }

    char operator *() const
    {
        return '0';
    }
};


char bits[33];

// 'value' is the integer to convert.
_itoa(value, bits, 2);        // write the binary digits as ASCII '0'/'1'
std::transform(               // subtract '0' from each digit in place,
    bits,                     // leaving raw 0/1 bytes in 'bits'
    bits + strlen(bits), 
    zero_ascii_iterator(), 
    bits, 
    std::minus<char>());
MSN
Wow. I wonder why Perl got the reputation for being incomprehensible =)
maerics
Definitely deserves an exclusive space @ codinghorror.
jweyrich
This is an example from real life?
Potatoswatter
I hope not. I wanted to write this without using boost::bind(...).
MSN
A: 

Here is a version that works with negative numbers:

#include <string>

using std::string;

string get_bits(unsigned int x)
{
  string ret;
  for (unsigned int mask = 0x80000000; mask; mask >>= 1) {  // walk from the high bit down
    ret += (x & mask) ? "1" : "0";
  }
  return ret;
}

The string can, of course, be replaced by a vector or indexed for bit values.
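
For example, a sketch of the same masking loop collecting ints into a vector instead (the name get_bit_vector is just illustrative):

#include <vector>

std::vector<int> get_bit_vector(unsigned int x)
{
  std::vector<int> ret;
  for (unsigned int mask = 0x80000000; mask; mask >>= 1)  // walk from the high bit down
    ret.push_back((x & mask) ? 1 : 0);
  return ret;
}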

maerics
+2  A: 

A modification of dcp's answer. The behavior is implementation-defined for negative values of t. It provides all the bits, even the leading zeros. The standard caveats about std::vector<bool> not being a proper container apply.

#include <vector>    //for std::vector
#include <algorithm> //for std::reverse
#include <climits>   //for CHAR_BIT

template<typename T>
std::vector<bool> convert(T t) {
  std::vector<bool> ret;
  for(unsigned int i = 0; i < sizeof(T) * CHAR_BIT; ++i, t >>= 1)
    ret.push_back(t & 1);
  std::reverse(ret.begin(), ret.end());
  return ret;
}

And a version that [might] work with floating point values as well. And possibly other POD types. I haven't really tested this at all. It might work better for negative values, or it might work worse. I haven't put much thought into it.

template<typename T>
std::vector<bool> convert(T t) {
  union {
    T obj;
    unsigned char bytes[sizeof(T)];  // view the object's raw bytes
  } uT;
  uT.obj = t;

  std::vector<bool> ret;
  for(int i = sizeof(T)-1; i >= 0; --i)                            // highest-indexed byte first
    for(unsigned int j = 0; j < CHAR_BIT; ++j, uT.bytes[i] >>= 1)  // low-order bit of each byte first
      ret.push_back(uT.bytes[i] & 1);
  std::reverse(ret.begin(), ret.end());
  return ret;
}
Dennis Zickefoose
Endianness probably pops up in that second one, huh? Oh well.
Dennis Zickefoose
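
For what it's worth, a small standalone sketch showing the byte order that union trick depends on; it assumes unsigned int is 4 bytes wide:

#include <cstdio>

int main()
{
    unsigned int x = 0x01020304u;
    const unsigned char *p = reinterpret_cast<const unsigned char *>(&x);
    for (unsigned int i = 0; i < sizeof x; ++i)
        std::printf("%02x ", p[i]);  // little-endian machines print: 04 03 02 01
    std::printf("\n");
    return 0;
}

Iterating bytes[sizeof(T)-1 .. 0] therefore gives most-significant-byte-first order only on little-endian hardware.
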
A: 

Returns a string instead of a vector, but can be easily changed.

#include <string>
#include <climits>  // CHAR_BIT

template<typename T>
std::string get_bits(T value) {
    int size = sizeof(value) * CHAR_BIT;
    std::string ret;
    ret.reserve(size);
    for (int i = size - 1; i >= 0; --i)
        ret += ((value >> i) & 1) == 0 ? '0' : '1';  // shift the value instead of 1 << i, which overflows for wide T
    return ret;
}
jweyrich
+1  A: 

It's not too hard to solve with a one-liner, but there is actually a standard-library solution.

#include <bitset>
#include <algorithm>
#include <functional>
#include <string>
#include <vector>
#include <climits>

std::vector< int > get_bits( unsigned long x ) {
    std::string chars( std::bitset< sizeof(long) * CHAR_BIT >( x )
        .to_string< char, std::char_traits<char>, std::allocator<char> >() );
    std::transform( chars.begin(), chars.end(), chars.begin(),
        std::bind2nd( std::minus<char>(), '0' ) );  // turn '0'/'1' characters into 0/1 values
    return std::vector< int >( chars.begin(), chars.end() );
}

C++0x makes it even easier!

#include <bitset>
#include <string>
#include <vector>
#include <climits>

std::vector< int > get_bits( unsigned long x ) {
    std::string chars( std::bitset< sizeof(long) * CHAR_BIT >( x )
        .to_string( char(0), char(1) ) );  // emit raw 0/1 bytes instead of '0'/'1'
    return std::vector< int >( chars.begin(), chars.end() );
}

This is one of the more bizarre corners of the library. Perhaps what they were really driving at was serialization.

cout << bitset< 8 >( x ) << endl; // print 8 low-order bits of x
Potatoswatter