views:

164

answers:

8

I was wondering if there is some standardised way of getting type sizes in memory at the preprocessor stage - it would have to be in macro form, so sizeof() does not cut it.

If there isn't a standardised method, are there conventional methods that most IDEs use anyway?

Are there any other methods that anyone can think of to get such data?

I suppose I could do a two-stage build kind of thing: get the output of a test program and feed it back into the IDE, but that's not really any easier than #defining them in myself.

Thoughts?

EDIT:

I just want to be able to swap code around with

#ifdef / #endif

Was it naive of me to think that an IDE or underlying compiler might define that information under some macro? Sure, the preprocessor doesn't get information on any actual machine code generation, but the IDE and the compiler do, and they call the preprocessor and declare stuff to it in advance.

EDIT FURTHER

What I imagined as a conceivable concept was this:

The C++ committee has a standard that says, for every type (perhaps only those native to C++), the compiler has to give the IDE a header file, included by default, that declares the size in memory that every native type uses, like so:

#define CHAR_SIZE 8
#define INT_SIZE 32
#define SHORT_INT_SIZE 16
#define FLOAT_SIZE 32
// etc.

Is there a flaw in this process somewhere?

EDIT EVEN FURTHER

In order to get around the multi-platform build-stage problem, perhaps this standard could mandate that a simple program like the one shown by lacqui be compiled and run by default. That way, whatever machine gets the type sizes will be the same machine that compiles the code in the second, 'normal' build stage.

Apologies:

I've been using 'Variable' instead of 'Type'

+3  A: 

Sorry, this information isn't available at the preprocessor stage. To compute the size of a variable you have to do just about all the work of parsing and abstract evaluation - not quite code generation, but you have to be able to evaluate constant-expressions and substitute template parameters, for instance. And you have to know considerably more about the code generation target than the preprocessor usually does.

The two-stage build thing is what most people do in practice, I think. Some IDEs have an entire compiler built into them as a library, which lets them do things more efficiently.

Zack
+1  A: 

How would that work? The size isn't known at the preprocessing stage. At that point, you only have the source code. The only way to find the size of a type is to compile its definition.

You might as well ask for a way to get the result of running a program at the compilation stage. The answer is "you can't, you have to run the program to get its output". Just like you need to compile the program in order to get the output from the compiler.

What are you trying to do?

Regarding your edit, it still seems confused.

Such a header could conceivably exist for built-in types, but never for variables. A macro could perhaps be written to replace known type names with a hardcoded number, but it wouldn't know what to do if you gave it a variable name.

Once again, what are you trying to do? What is the problem you're trying to solve? There may be a sane solution to it if you give us a bit more context.

jalf
I was wondering if there were any macros that were accepted across multiple compilers, or even only by a few particular ones. Is that so hard to believe? Compilers set macros for programmers all the time.
Tomas Cokis
@Tomas: Not hard to believe, just impossible. Macros work on text, not on types. They don't *see* types, they just see names. The type doesn't exist until the compiler has processed the code.
jalf
@^ See new edit
Tomas Cokis
When the preprocessor sees a name, say, `Foo`, it doesn't yet know what that name *is*. Is it a class? A built-in type? A function name? A language keyword? A compiler intrinsic? The preprocessor doesn't know, it only sees the input text. It knows that it has encountered the name `Foo`, and that's all.
jalf
As I said, it's all in macros, and to avoid the problem you just described it would only be for native data types. Did you even read my edit?
Tomas Cokis
@Tomas: I wrote that comment before I saw yours about the edit, so no. I didn't.
jalf
+2  A: 

No, it's not possible. Just for example, it's entirely possible to run the preprocessor on one machine, and do the compilation entirely separately on a completely different machine with (potentially) different sizes for (at least some) types.

For a concrete example, consider that the normal distribution of SQLite is what they call an "amalgamation" -- a single already-preprocessed source code file that you actually compile on your computer.

Jerry Coffin
Sure, it wouldn't be entirely 'safe' for every method of development, but this is something that already happens to some degree! I've seen code that detects the compiler at the preprocessor stage via macros and makes assumptions about type sizes!
Tomas Cokis
@Tomas: Sure -- people do all sorts of semi- to completely-broken things all the time. A thousand broken hacks doesn't make one standardized method. In fairness, the preprocessor does give ranges of various types, so in theory something like this *could* be done under the right circumstances, but I don't think it has (or will) happen in practice.
Jerry Coffin
Sorry to be so aggressive; you are probably right in terms of developing an official standard - however, surely there are other things that cross-machine development messes with?
Tomas Cokis
Maybe some kind of double compile could be the standard, though!
Tomas Cokis
A simple program like the one shown by lacqui would be required to run by default; this way, whatever machine gets the type sizes will be the same machine that compiles the code in the second, 'normal' build stage.
Tomas Cokis
@Tomas: What code are you referring to? I really feel like there's a big misunderstanding here somewhere, so knowing what you mean would be helpful. Where have you seen "code that gets the compiler in the preprocessor stage"?
jalf
+3  A: 

Depending on your build environment, you may be able to write a utility program that generates a header that is included by other files:

#include <stdio.h>
#include <limits.h>  // CHAR_BIT

int main(void) {
    FILE *out = make_header_file();  // defined by you; returns a writable FILE*
    fprintf(out, "#ifndef VARTYPES_H\n#define VARTYPES_H\n");

    size_t intbits = sizeof(int) * CHAR_BIT;  // sizeof counts bytes, not bits
    if (intbits == 32)
        fprintf(out, "#define INTSIZE_32\n");
    else if (intbits == 64)
        fprintf(out, "#define INTSIZE_64\n");
    // .....
    else
        fprintf(out, "#define INTSIZE_UNKNOWN\n");

    fprintf(out, "#endif\n");
    fclose(out);
    return 0;
}

Of course, edit it as appropriate. Then include "vartypes.h" everywhere you need these definitions.

EDIT: Alternatively:

fprintf(out, "#define INTSIZE_%d\n", (int)(sizeof(int) * CHAR_BIT));
fprintf(out, "#define INTSIZE %d\n", (int)(sizeof(int) * CHAR_BIT));

Note the lack of an underscore in the second one - the first creates INTSIZE_32, which can be used in #ifdef checks; the second creates INTSIZE, which can be used as a value, for example: char bits[INTSIZE];

lacqui
An excellent alternative!
Tomas Cokis
A: 

The term "standardized" is the problem. There's no standard way of doing it, but it's not very difficult to set some preprocessor symbols using a configuration utility of some sort. A really simple one would compile and run a small program that checks sizes with sizeof and then outputs an include file with some symbols set.

JOTN
+2  A: 

Why do you need this anyway?

The <cstdint> header provides typedefs and #defines that describe all of the standard integer types, including typedefs for exact-width integer types and #defines for their full value ranges.

greyfade
A: 

For common build environments, many frameworks have this set up manually. For instance,

http://www.aoc.nrao.edu/php/tjuerges/ALMA/ACE-5.5.2/html/ace/Basic__Types_8h-source.html

defines things like ACE_SIZEOF_CHAR. Another library, POSH, described in a book I bought, does this too, in a very includable way: http://www.hookatooka.com/wpc/

Scott Stafford
+1  A: 

You want to generate different code based on the size of some type? Maybe you can do this with template specializations:

#include <iostream>

template <int Tsize>
struct dosomething{
  void doit() { std::cout << "generic version" << std::endl; }
};

template <>
void dosomething<sizeof(int)>::doit()
{ std::cout << "int version" << std::endl; }

template <>
void dosomething<sizeof(char)>::doit()
{ std::cout << "char version" << std::endl; }


int main(int argc, char** argv)
{
  typedef int foo;
  dosomething<sizeof(foo)> myfoo;
  myfoo.doit();

}
TokenMacGuy
Yep. This is where templates vastly outshine the preprocessor.
sbi