
I have two tasks for an assignment. The first is to return the number of bits in type int on any machine. I thought I would write my function like so:

int CountIntBitsF() {
    int x = sizeof(int) / 8;
    return x;
}

Does that look right?

The second part is to return the number of bits of any data type with a macro, and the macro can be taken from limits.h. I looked up limits.h on my machine, and also http://www.opengroup.org/onlinepubs/007908799/xsh/limits.h.html, but I don't think I really understand how any of those macros would give the number of bits in a data type. Any thoughts? Thanks.

+5  A: 

It's *, not /.

As for the second part, see the "Numerical Limits" section.
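
Applied to the function in the question, that fix would look like the following sketch (the literal 8 assumes an 8-bit char):

int CountIntBitsF() {
    /* Multiply by the bits per byte instead of dividing. */
    return sizeof(int) * 8;
}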

Ignacio Vazquez-Abrams
+8  A: 

The fundamental unit of storage is a char, and it is not always 8 bits wide. CHAR_BIT is defined in limits.h and gives the number of bits in a char.
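
For instance, a minimal sketch using CHAR_BIT (the helper name is illustrative):

#include <limits.h>

int intStorageBits(void) {
    /* sizeof gives bytes; CHAR_BIT converts bytes to bits
     * without assuming 8-bit bytes. */
    return (int)(sizeof(int) * CHAR_BIT);
}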

Justin Smith
+2  A: 

In limits.h, UINT_MAX is the maximum value for an object of type unsigned int, which means it is an unsigned int with all bits set to 1. So, counting the number of bits in an int:

#include <limits.h>

int intBits () {
    int x = INT_MAX;
    int count = 2; /* start from 1 + 1 because we assume
                    * that sign uses a single bit, which
                    * is a fairly reasonable assumption
                    */

    /* Keep shifting bits to the right until none are left.
     * We use division instead of >> here since I personally
     * know some compilers that do not shift in a zero as
     * the topmost bit.
     */
    while ((x = x / 2) != 0) count++;

    return count;
}
slebetman
Such compilers are violating the standard, fwiw.
Roger Pate
@slebetman: you might be thinking about shifting *signed* values. For unsigned types, shifting is well-defined.
Alok
I think there's some confusion here whether x should be unsigned or signed. The question asks about `int`, in which case the comment about shift would be justified, but for some reason this answer is about unsigned int.
Steve Jessop
@Steve: Signed int and unsigned int have the same number of bits. It's just INT_MAX has 1 bit fewer than UINT_MAX. That's why I used unsigned.
slebetman
I think this is what my prof is looking for, since in his hints he talks about using a 1, left-shifting it, and keeping a count. Where is it documented that things like UINT_MAX are filled with all 1s? Or is that tribal knowledge after enough time with the language?
Crystal
UINT_MAX *has* to be filled with 1s, since if there were any 0s it wouldn't represent the maximum unsigned value.
Ignacio Vazquez-Abrams
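
As an aside, the professor's hint mentioned in the comment above (left-shift a 1 and keep a count) might look like this sketch, using unsigned int so the shift is well-defined; the function name is illustrative:

int countUnsignedIntBits(void) {
    unsigned int bit = 1;
    int count = 0;

    /* Left-shift a 1 until it is shifted out entirely; unsigned
     * arithmetic wraps to 0, so the loop terminates cleanly. */
    while (bit != 0) {
        count++;
        bit <<= 1;
    }
    return count;
}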
@Roger: I've used and at times am still forced to use compilers for embedded systems that violate all kinds of standards. One of the compilers I use implements an extension to C that shifts in the carry bit from the accumulator when using the >> operator even for unsigned int. This is partly because the assembly instruction behaves that way and they can implement the >> operator in a single instruction if they violate the standard.
slebetman
One must remember that source code is turned into machine language by compilers, rather than by the documents specifying them.
Crashworks
@slebetman: technically that's not an extension to the standard, it's just a violation, and the compiler in that mode is therefore not a C compiler, it's a compiler of some other language very similar to C. Otherwise, Java is an "extension" of the C standard, by adding and removing rules from C until you end up with Java ;-). An extension to the C standard is when you take something which would not be legal C, and define what it does in your implementation. It doesn't affect legal C.
Steve Jessop
@slebetman: "Signed int and unsigned int have the same number of bits". I can't find that in the standard, do you know where it's stated? What rule do I break if in my implementation sizeof(unsigned int) == 4, UINT_MAX == 0xFFFFFFFF, sizeof(int) == 4, INT_MAX == 0x3FFFFFFF, and int has a padding bit for no good reason that I can think of other than lulz?
Steve Jessop
@Steve: Hmm. You're right. But that means, strictly speaking, we can't really get the number of bits in an int since you can also legally implement -1 as 0x80000000.
slebetman
slebetman: You don't need to touch negatives (shifting them is implementation-defined anyway), just start at INT_MAX and shift until you hit zero, that's the number of value bits. Since it's signed, it has one sign bit, and `sizeof(int)*CHAR_BIT - value_bits - 1` gives you the number of padding bits.
Roger Pate
@Roger: Ah good point. But it does still make the assumption that the sign bit is only one bit. Which is not mandated by any standard. You're still legally allowed to implement sign bit as two bits. But I think this is a better assumption than assuming that uint is one bit more than int. So code fixed.
slebetman
@slebetman: "For signed integer types .. there shall be exactly one sign bit." 6.2.6.2/2 in C99.
Roger Pate
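
Pulling Roger's recipe from these comments into a sketch (the helper name and out-parameters are illustrative; the sign-bit guarantee is the C99 one cited above):

#include <limits.h>

void intBitBreakdown(int *value_bits, int *padding_bits) {
    int x = INT_MAX;
    int bits = 0;

    /* Halve INT_MAX until it hits zero: that counts the value bits. */
    while (x != 0) {
        bits++;
        x /= 2;
    }
    *value_bits = bits;

    /* C99 6.2.6.2/2 guarantees exactly one sign bit, so anything
     * left over in the storage width must be padding. */
    *padding_bits = (int)(sizeof(int) * CHAR_BIT) - bits - 1;
}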
A: 

Are you sure you want the number of bits, not the number of bytes? In C, for a given type T, you can find the number of bytes it takes by using the sizeof operator. The number of bits in a byte is CHAR_BIT, which is usually 8, but can be different.

So, given a type T, the number of bits in an object of type T is:

#include <limits.h>
size_t nbits = sizeof(T) * CHAR_BIT;

Note that, except for the unsigned char type, not all possible combinations of those nbits bits necessarily represent a valid value of type T.

For the second part, note that you can apply the sizeof operator to an object as well as to a type. In other words, given a type T and an object x of that type:

T x;

You can find the size of T by sizeof(T), and the size of x by sizeof x. The parentheses are optional if sizeof is used for an object.
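
For example:

#include <stdio.h>

int main(void) {
    double x;
    printf("%zu\n", sizeof(double)); /* size of the type; parentheses required */
    printf("%zu\n", sizeof x);       /* size of the object; parentheses optional */
    return 0;
}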

Given the information above, you should be able to answer your second question. Ask again if you still have issues.

Alok
+2  A: 

If you want the number of bits used to store an int in memory, use Justin's answer, sizeof(int)*CHAR_BIT. If you want to know the number of bits used in the value, use slebetman's answer.

Although to get the bits in an `int`, you should probably use INT_MAX rather than UINT_MAX. I can't remember whether C99 actually guarantees that int and unsigned int are the same width, or just that they're the same storage size. I suspect only the latter, since in 6.2.6.2 we have "if there are M value bits in the signed type and N in the unsigned type, then M <= N", not "M = N or M = N-1".

In practice, integral types don't have padding bits in any implementation I've used, so you most likely get the same answer for all, +/- 1 for the sign bit.
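
A small sketch comparing the two counts side by side (illustrative only):

#include <limits.h>
#include <stdio.h>

int main(void) {
    int storage_bits = (int)(sizeof(int) * CHAR_BIT);
    int value_bits = 0;
    int x = INT_MAX;

    while (x != 0) {
        value_bits++;
        x /= 2;
    }

    /* On an implementation with no padding bits, these differ by
     * exactly one: the sign bit. */
    printf("storage bits: %d, value bits: %d\n", storage_bits, value_bits);
    return 0;
}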

Steve Jessop
Another quote from C99 draft (6.2.5.6): For each of the signed integer types, there is a corresponding (but different) unsigned integer type (designated with the keyword `unsigned`) that uses the same amount of storage (including sign information) and has the same alignment requirements.
Alok
Thanks. And note "same amount of storage", not saying "same number of non-padding bits".
Steve Jessop
Why do we need to know the difference between the bits of an int in memory vs. the bits used in the value? Are the bits used in the value more important if you were doing some sort of hardware programming, where you needed to know the number of bits used to represent certain register values or something along those lines?
Crystal
Yes, there are many important reasons to care about the bit width of an int. For one thing, it determines how large a value may be stored (e.g., a signed 16-bit int can store -32768..32767 while an unsigned 16-bit int can store 0..65535, and so on). It's also significant if you need to serialize your data, e.g. for saving to a file or transmitting across the network.
Crashworks
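
For instance, those 16-bit ranges can be derived from the bit count alone; a sketch, assuming two's complement as the quoted ranges do:

#include <stdio.h>

int main(void) {
    int n = 16; /* bit width under discussion */

    /* Signed n-bit two's complement spans -2^(n-1)..2^(n-1)-1;
     * unsigned spans 0..2^n-1. */
    printf("signed:   %ld..%ld\n", -(1L << (n - 1)), (1L << (n - 1)) - 1);
    printf("unsigned: 0..%ld\n", (1L << n) - 1);
    return 0;
}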