Hi there

I'm working on a spare-time project, writing some server code for an Arduino Duemilanove, but before I test this code on the controller I am testing it on my own machine (an OS X-based MacBook). I am using ints in some places, and I am worried that this will cause strange errors when the code is compiled and run on the Arduino Duemilanove, because the Arduino handles ints as 2 bytes while my MacBook handles ints as 4 bytes. I'm not a hardcore C and C++ programmer, so I am a bit unsure how an experienced programmer would handle this situation. Should I restrict the code with a typedef that wraps my own definition of an int restricted to 2 bytes? Or is there another way around it?

A: 

Avoid using the type int, as its size can depend on the architecture and compiler.

Use short and long instead.
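
A project-wide typedef along those lines can keep the rest of the code unchanged when switching between the MacBook and the Arduino. A minimal sketch (the srv_* names are made up for illustration; it assumes short is 16 bits and long is at least 32 bits, which holds on both platforms):

    /* sketch: project-wide integer typedefs for the server code */
    typedef short          srv_int16;    /* hypothetical name: 16-bit signed value */
    typedef unsigned short srv_uint16;   /* hypothetical name: 16-bit unsigned value */
    typedef long           srv_int32;    /* hypothetical name: at-least-32-bit signed value */

    srv_uint16 packet_length;            /* same size in the Mac test build and on the Arduino */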

Tree77
Is that defined somewhere, that short and long are system independent? Because that sounds like a very good solution to my problem.
mslot
@mslot - no it's not. There are some constraints, but you shouldn't rely on them. Where you target specific hardware, you should use appropriate fixed-size types as mentioned by bdonlan and phooky. Use a typedef as well, and you can target other hardware easily.
martin clayton
@mslot yes - short is defined to be no longer than int, and long is defined to be no shorter than int - other than that you are on your own
Martin Beckett
+5  A: 

The C standard defines an int as a signed type large enough to hold at least all integers between -32767 and 32767; implementations are free to choose larger types, and any modern 32-bit system will choose a 32-bit integer. However, as you've seen, some embedded platforms still use 16-bit ints. I would recommend using uint16_t or uint32_t if your Arduino compiler supports them; if not, use preprocessor macros to typedef those types yourself.
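
If the Arduino toolchain turns out not to ship stdint.h, a fallback along those lines might look like this (a sketch only; HAVE_STDINT_H is a made-up macro your build would have to define, and the fallback typedefs are only valid on a 16-bit-int, 32-bit-long target such as the AVR):

    #ifdef HAVE_STDINT_H              /* hypothetical feature-test macro */
    #include <stdint.h>
    #else
    /* fallback: correct only where int is 16 bits and long is 32 bits (e.g. AVR) */
    typedef signed   int  int16_t;
    typedef unsigned int  uint16_t;
    typedef signed   long int32_t;
    typedef unsigned long uint32_t;
    #endif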

bdonlan
damn, you beat me
delnan
+7  A: 

Your best bet is to use the stdint.h header. It defines typedefs that explicitly refer to the signedness and size of your variables. For example, a 16-bit unsigned integer is a uint16_t. It's part of the C99 standard, so it's available pretty much everywhere. See:

http://en.wikipedia.org/wiki/Stdint.h
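
On the desktop test build you can check that these types keep the same size as on the Arduino; a quick sketch:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t counter = 0;       /* exactly 16 bits on the MacBook and on the Arduino */
        int32_t  total   = -100000; /* exactly 32 bits, signed, on both */

        printf("sizeof(uint16_t) = %zu\n", sizeof counter);  /* prints 2 everywhere */
        printf("sizeof(int32_t)  = %zu\n", sizeof total);    /* prints 4 everywhere */
        return 0;
    }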

phooky
I liked this answer because you gave me the link and explained some of the definitions to me.
mslot
+2  A: 

Will you need values smaller than -32,768 or bigger than +32,767? If not, ignore the different sizes. If you do need them, there's stdint.h with fixed-size integers, signed and unsigned, called intN_t/uintN_t (N = number of bits). It's C99, but most compilers will support it. Note that using integers with a size bigger than the CPU's word size (16 bits in this case) will hurt performance, as there are no native instructions for handling them.
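
To see why ignoring the difference can bite later, a small test like this (a sketch for the desktop build) shows where a plain int diverges from a fixed 16-bit type:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t u = 65535;            /* maximum value of a 16-bit unsigned integer */
        u = u + 1;                     /* wraps to 0 on every platform, not just the Arduino */
        printf("%u\n", (unsigned)u);   /* prints 0 */

        int n = 32767;                 /* fits in an int everywhere... */
        n = n + 1;                     /* ...fine with 32-bit ints, overflows a 16-bit int */
        printf("%d\n", n);             /* 32768 on the MacBook; undefined where int is 16 bits */
        return 0;
    }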

delnan
I have thought about just ignoring it, but then again, I know that this could mislead me in some way later in the project.
mslot
+3  A: 

The correct way to handle the situation is to choose the type based on the values it will need to represent:

  • If it's a general small integer, and the range -32767 to 32767 is OK, use int;
  • Otherwise, if the range -2147483647 to 2147483647 is OK, use long;
  • Otherwise, use long long.
  • If the range -32767 to 32767 is OK and space efficiency is important, use short (or signed char, if the range -127 to 127 is OK).

As long as you have made no assumptions other than these (i.e. always using sizeof instead of assuming the width of the type), your code will be portable.

In general, you should only need to use the fixed-width types from stdint.h for values that are being exchanged through a binary interface with another system - i.e. being read from or written to the network or a file.
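
As an illustration of that last point, a sketch along these lines keeps a plain long internally and brings in a fixed-width type only where the bytes leave the program (write_reading and the little-endian layout are made up for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* hypothetical helper: write a reading in a fixed 16-bit little-endian format */
    static void write_reading(FILE *out, long reading)
    {
        uint16_t wire = (uint16_t)reading;             /* fixed width only at the boundary */
        unsigned char buf[2];
        buf[0] = (unsigned char)(wire & 0xFFu);        /* low byte first */
        buf[1] = (unsigned char)((wire >> 8) & 0xFFu); /* then high byte */
        fwrite(buf, 1, sizeof buf, out);
    }

    int main(void)
    {
        long reading = 1234;              /* internal code uses a type chosen by range */
        write_reading(stdout, reading);   /* emits the two bytes 0xD2 0x04 */
        return 0;
    }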

caf