views:

250

answers:

7

Variables of type int are allegedly "one machine-type word in length", but on embedded systems that doesn't hold: C compilers for 8-bit micros typically make int 16 bits wide (and unsigned char 8 bits). For wider parts int behaves as expected: on 16-bit micros int is 16 bits, on 32-bit micros int is 32 bits, and so on.

So, is there a standard way to test it, something like BITSIZEOF( int ) ?

like "sizeof" is for bytes but for bits.

This was my first idea:

    register c = 1;      /* no explicit type, so c defaults to int (C89 implicit int) */
    int bitwidth = 0;

    /* shift the single set bit left until it falls off the top of c */
    do {
        bitwidth++;
    } while (c <<= 1);

    printf("Register bit width is: %d\n", bitwidth);

But it takes c as int, and it's common for 8-bit compilers to use a 16-bit int, so it gives me 16 as the result. It seems there is no standard that ties "int" to the register width (or at least it's not respected).

Why do I want to detect it? Suppose I need many variables that hold fewer than 256 values, so they could be 8, 16, or 32 bits wide. Using the right size (matching the registers and memory width) will speed things up and save memory, and if this can't be decided in code, I have to re-write the function for every architecture.

EDIT: After reading the answers I found this good article:

http://embeddedgurus.com/stack-overflow/category/efficient-cc/page/4/

I will quote the conclusion (bold added):

Thus the bottom line is this. If you want to start writing efficient, portable embedded code, the first step you should take is start using the C99 data types **‘least’** and **‘fast’**. If your compiler isn’t C99 compliant then complain until it is – or change vendors. If you make this change I think you’ll be pleasantly surprised at the improvements in code size and speed that you’ll achieve.
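
For concreteness, a minimal sketch (mine, not from the article) of what using those C99 types looks like; `uint_fast8_t` and `uint_least8_t` come from `<stdint.h>`:

    #include <stdint.h>
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        uint_fast8_t  counter = 0;     /* at least 8 bits, whatever width is fastest  */
        uint_least8_t samples[16];     /* at least 8 bits, favouring compact storage  */

        for (counter = 0; counter < 16; counter++)
            samples[counter] = (uint_least8_t)(counter * 2);

        printf("uint_fast8_t is %u bits, uint_least8_t is %u bits, last sample = %u\n",
               (unsigned)(sizeof(uint_fast8_t) * CHAR_BIT),
               (unsigned)(sizeof(uint_least8_t) * CHAR_BIT),
               (unsigned)samples[15]);
        return 0;
    }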

+1  A: 

The ISA you're compiling for is already known to the compiler when it runs over your code, so your best bet is to detect it at compile time. Depending on your environment, you could use everything from autoconf/automake style stuff to lower level #ifdef's to tune your code to the specific architecture it'll run on.
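
For instance, a rough sketch of the `#ifdef` approach (the macro names below are common vendor-predefined ones, but they are assumptions here; check what your particular toolchain actually defines):

    /* Pick a "natural word" type per target. __AVR__ and __MSP430__ are
       macros predefined by common 8-bit/16-bit toolchains; verify the
       exact names against your compiler's documentation. */
    #if defined(__AVR__)
    typedef unsigned char  word_t;   /* 8-bit registers  */
    #elif defined(__MSP430__)
    typedef unsigned int   word_t;   /* 16-bit registers */
    #else
    typedef unsigned long  word_t;   /* assume 32 bits or wider */
    #endif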

Luke404
Way too complicated. Use `sizeof(int) * CHAR_BIT` as proposed by Paul and Andrey.
sbi
+8  A: 
#include <limits.h>

const int bitwidth = sizeof(int) * CHAR_BIT;
Paul R
+1  A: 

I don't exactly understand what you mean by "there is no standard that ties int to the register width". In the original C language specification (C89/90) the type int is implied in certain contexts when no explicit type is supplied. Your register c is equivalent to register int c, and that is perfectly standard in C89/90. Note also that the C language specification requires type int to support at least the -32767...+32767 range, meaning that on any platform int will have at least 16 value-forming bits.

As for the bit width... sizeof(int) * CHAR_BIT will give you the number of bits in the object representation of type int.

Theoretically though, the value representation of type int is not guaranteed to use all bits of its object representation. If you need to determine the number of bits used for value representation, you can simply analyze the INT_MIN and INT_MAX values.
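
For example, a small sketch of that kind of analysis for the unsigned counterpart (counting how many value bits `UINT_MAX` actually covers):

    #include <limits.h>

    /* Count value-forming bits by shifting UINT_MAX down to zero.
       For signed int, do the same with INT_MAX and add one sign bit. */
    static int uint_value_bits(void)
    {
        unsigned int max = UINT_MAX;
        int bits = 0;

        while (max != 0) {
            max >>= 1;
            bits++;
        }
        return bits;
    }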

P.S. Looking at the title of your question, I suspect that what you really need is just the CHAR_BIT value.

AndreyT
The problem is that on an 8-bit and a 16-bit architecture, CHAR_BIT, INT_MIN and INT_MAX have the same values, so I can't use them for detection, at least with the most common microcontroller compilers. I think the standard range for int starts at 16 bits and goes up.
Hernán Eche
@Hernán: Are you saying that on these architectures `CHAR_BIT==INT_MIN==INT_MAX`? That's ridiculous! The standard-conforming way to determine the number of bits for any type `T` is `sizeof(T)*CHAR_BIT`.
sbi
@sbi, they have the same values in that sense: CHAR_BIT (on the 8-bit part) = CHAR_BIT (on the 16-bit part), INT_MIN (8-bit part) = INT_MIN (16-bit part), etc.
Hernán Eche
@Hernán: Well, then `char` has the same size on both of these architectures, and so has `int`. What's your problem then?
sbi
@sbi, I have to detect the architecture to know whether to use char or int. I would like a preprocessor-selected type that gives me an 8-bit variable when the registers are 8 bits wide and a 16-bit one when they are 16 bits wide. That type is exactly "int" for 16, 32, 64... but on 8-bit parts I have the problem that int is 16 bits wide.
Hernán Eche
I don't think the C standard even requires UINT_MAX to be a power of two. I think one could have a standards-conforming C compiler target a machine where every number was stored in ten ten-state counting tubes. Computation of boolean operators on such a machine would be very slow (they'd have to perform repeated divmod-2 operations) but that wouldn't make it non-standards-conformant.
supercat
@supercat: I believe it does require `UINT_MAX` to be a power of 2 minus 1. Unsigned arithmetic in C is required to be modulo 2^N, where N is the number of value-forming bits in the unsigned type. That immediately means that maximum value of any unsigned type should be 2^N-1. Otherwise, you are absolutely right. A valid C implementation can be built on top of ternary or decimal hardware.
AndreyT
@Hernán: I see. Sorry for being so dense, but I must have mis-parsed your question. Well, in the end, what you want would have been easy to do using C++'s template-meta stuff, but since you're stuck in C, I think `<stdint.h>` is what you need.
sbi
A: 

Does an unsigned char or unsigned short suit your needs? Why not use that? If not, you should be using compile time flags to bring in the appropriate code.
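
As a sketch of the compile-time-flag idea (`TARGET_8BIT` is a made-up project macro that would be passed as `-DTARGET_8BIT` in the 8-bit build):

    /* TARGET_8BIT is a hypothetical macro supplied by the build system,
       e.g. with -DTARGET_8BIT on the 8-bit toolchain's command line. */
    #ifdef TARGET_8BIT
    typedef unsigned char  small_uint;   /* holds 0..255 in one register */
    #else
    typedef unsigned short small_uint;   /* 16 bits or more elsewhere    */
    #endif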

MikeyB
There is overhead when converting to/from the default machine word size, so using variables as small as possible is not the best way to go.
Hernán Eche
+7  A: 

To answer your deeper question more directly: if you need very specific storage sizes that are portable across platforms, you should use something like stdint.h, which defines storage types specified by their number of bits.

For example, uint32_t is always unsigned 32 bits and int8_t is always signed 8 bits.
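
For example (a small sketch; the `PRIX32` print macro comes from `<inttypes.h>`):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t crc   = 0xDEADBEEFu;  /* exactly 32 bits wherever it exists */
        int8_t   delta = -5;           /* exactly 8 bits, signed             */

        printf("crc = 0x%08" PRIX32 ", delta = %d\n", crc, (int)delta);
        return 0;
    }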

Amardeep
+1 this is a solution (though I still can't detect the hardware width)
Hernán Eche
Isn't it that `uint32_t` is _at least_ 32 bits? ICBWT.
sbi
@sbi: They are called exact width integers in the man page. But I referred to the incorrect header file. It isn't types.h, but stdint.h. I'm correcting the answer.
Amardeep
+15  A: 

I have to re-write the function for every architecture

No you don't. Use C99's stdint.h, which has types like uint_fast8_t: a type capable of holding the 256 values you need, and quickly.

Then, no matter the platform, the types will change accordingly and you don't change anything in your code. If your platform has no set of these defined, you can add your own.

Far better than rewriting every function.
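
A minimal sketch of both points (the `HAVE_STDINT_H` guard is a made-up build-system macro, and the fallback width is an assumption to check against the compiler manual):

    #ifdef HAVE_STDINT_H            /* hypothetical build-system macro */
    #include <stdint.h>
    #else
    /* hand-rolled fallback for a compiler without <stdint.h>;
       assumes an 8-bit target where char is the fastest 8-bit type */
    typedef unsigned char uint_fast8_t;
    #endif

    /* Holds 0..255 everywhere; the compiler/library picks the fastest width. */
    static uint_fast8_t tick_count;

    void tick(void)
    {
        if (++tick_count >= 200)
            tick_count = 0;
    }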

GMan
+1 this is a solution, and a more standard one I think (though I still can't detect the hardware width)
Hernán Eche
@Hernan: Not surprising. C doesn't care about the implementation, and width is the kind of thing that would need to be exposed with some OS-specific API.
GMan
The CodeWarrior compiler seems not to be C99 compliant; I will follow embeddedgurus.com's advice and complain or change vendors =P
Hernán Eche
Once I started using `stdint.h` I have never looked back. This eases portability more than anything.
Gerhard
@Gerhard, I really wish it was standard in C++, though. =]
strager
A: 

I think that in this case you don't need to know how many bits your architecture has. Just use variables as small as possible if you want to optimize your code.

mack369
Given the last paragraph of the question, I think this is the way to go. If all your values are less than 256, use a `uint8_t`. When your CPU copies it into a register, it will sign-extend it as needed to fill the rest of the register. You should see no performance penalty for using numbers that are smaller than the register size. Your best bet is to use the smallest data type from `stdint.h` that can represent all of your values.
bta
Perhaps, but I am thinking unsigned char is the most portable answer, just because stdint.h is from C99 and not present in earlier standards.
Hernán Eche